DaedTech

Stories about Software


A Blogger Grows Up: DaedTech Year in Review for 2013

Last year, I made a retrospective post, and I titled it “A Blog Grows Up: DaedTech Year in Review for 2012.” I think I did this because a lot of other bloggers seemed to do it. And one of the big themes of last year’s post was that I was becoming a real, non-faking-it blogger. This year, I’m making a retrospective post largely because of my own personal neurosis when it comes to symmetry and gamification. In other words, I’m doing it because it presents an opportunity to evaluate myself and compare metrics.

This sort of odd introspection is telling, and it’s the reason this year’s title is exactly four characters different from last year. One character change is in the year (2013 vs 2012), and the other three differences go to “blogger” instead of “blog.” Last year, the DaedTech blog grew up; this year, I grew up as both a blogger and as a person who maintains a blog. The blog itself isn’t a lot different. I’ve updated a few plugins and changed the theme of the blog to be more mobile-friendly. I won’t win any UX awards, but I’m staying reasonably current in the limited time that I have. There have been no real game changers in terms of the way the blog looks.

And yet, the results from blogging are once again dramatic. In 2011, my followers were in the tens. In 2012, I was proud because I had changed that to numbering in the hundreds. I had gone out and interacted with other people, exchanged ideas, created a social media presence, improved my posting cadence and all sorts of other things that I mentioned in the linked post. I had figured out how to bolster my readership by doing all of the right things — all of the things that an experienced blogger will tell an inexperienced one to do. In a sense, I had faked it until I made it.

This year, I’m humbled and flattered to report, my regular following went from hundreds to thousands. Last year, I was ecstatic to report that 2012 had resulted in a tripling of my traffic, but in 2013, it increased by a factor of 10 from 2012. If the trajectory of readership in 2014 realizes even half of the gains of previous years, next year will see DaedTech record a million unique visitors. Whereas last year I attributed this kind of growth to all of the effort that I had put in to figuring out how to attract followers, this year I attribute it to a variety of factors, not the least of which are persistence and luck. I’m fortunate in that some of my posts have become extremely popular and also in that I have a great base readership that’s very supportive. (Thanks, everyone.)

Ebooks and Course Authorship

Last year, I talked about a number of improvements to the blog. These were things like social media interaction buttons, tagging scheme, feed provider, etc. These improvements were invaluable, and they set the stage for the blog to be taken seriously. This year, I focused instead on improving the brand behind the blog. Actually, scratch that “brand” talk. I focused on improving myself and my credentials. I established a relationship with Pluralsight and began authoring content for them. I have two courses that are aimed at a relatively advanced .NET developer, which, in spite of their narrow focus, are highly rated. I am also starting work on a third course about home automation, and I’ve booked arrangements for the Pluralsight author summit later this winter in the hopes of trading tips with other authors to improve my course quality. If you aren’t improving, you’re rotting, as I once said in one of the Expert Beginner posts (more on that later).

Speaking of expert beginners, I also have put out a couple of ebooks. One was a simple republish of the “How to Keep Your Best Programmers” post. The other came as a result of my editor’s and my effort to restructure a series of posts into an actual, polished work intended for the ebook format. Both of these ebooks (and another one coming soon from my “Intro to Unit Testing” series) have been published by my friend Zack through his Chicago-based startup, Blog Into Book.

In last year’s post, I used an unordered list to outline all of the things I’d done to make DaedTech a more successful blog. This year, I’ll just say that starting to author courses for Pluralsight and publishing ebooks have been huge successes for me. These things, combined with my own career transition, have significantly impacted the blog. I’ve recently made the jump from a developer and architect to CIO, and I feel that this has helped me offer an interesting voice on subjects running the entire technical gamut.

Expert Beginners and Optimism

A major catalyst for the blog’s popularity has been the Expert Beginner series and subsequent ebook. A fairly large number of you, to whom I am extremely grateful, have supported me by purchasing the ebook. A jaw-dropping number of you have read the series of posts and offered encouragement and positive feedback. I’d be misrepresenting the situation if I said that this series wasn’t the lion’s share of the reason that the blog has grown so much in the last year.

I’m honestly thrilled about this. I’m thrilled that so many people recognize what I’ve described as “expert beginnerism” to be one of the most subtle yet important impediments to progress in our industry. I’m also encouraged to think that I’m not crazy in thinking about this, since it’s resonated with so many people. At the time that I wrote that first post, I seriously thought that I might roundly be dismissed as some kind of malcontent and crank. I thought the expert beginners against whom I was railing dominated our industry to such a degree that my opinion would be unpopular. And yet, it was widely popular, instilling in me a sense of intense optimism that probably seems out of place from someone that wrote such cynical posts. The fact that all of you see expert beginnerism and have disdain for it means that we’re on the right track to stomping it out.

And it isn’t just readership or purchase of the ebook that encourages me. It’s the fact that my Twitter alter-ego, the Expert Beginner, has such an enthusiastic following. It is from this account that I tweet some of the most alarming, depressing, maddening, amazing, infuriating and downright surreal things that I’ve heard in my time in the industry. The fact that hundreds of people commiserate with me, including people with industry influence, cheers me up immensely on a daily basis. The more frustration I express with stupid things I’ve heard in the industry, the more that people express implicit solidarity. It renews my faith in developer-kind.

Blogging Lessons Learned

This year, I’ve learned that nitty-gritty posts about technical details tend to bring hits to the blog, but not regular readers. Regular readers, at least of this blog, seem to be drawn in more by posts about office politics, broad ideas, interactions and other big picture items. However, to have any cred as I write the latter style of post, I think some of the former style of post is necessary, lest people ask, “is he technical?” I need to learn where the sweet spot is here, but the main thing I’ve already learned is that I’m more of an op-ed blogger than one writing instructional posts. At least, that’s what I’m doing when my posts are best received.

Fun Facts

Below are my most popular posts of 2013 in terms of readership. In last year’s version of this post, there was a dead heat between instructional posts and op-ed style posts. This year, not so much.

  1. How Developers Stop Learning: Rise of the Expert Beginner
  2. The 7 Habits of Highly Overrated People
  3. How to Keep Your Best Programmers
  4. How Software Groups Rot: Legacy of the Expert Beginner
  5. Getting Too Cute with C# Yield Return

Here are the countries in which DaedTech is most popular:

  1. USA
  2. United Kingdom
  3. Canada
  4. India
  5. Germany
  6. Australia
  7. Russia
  8. Sweden
  9. Netherlands
  10. France

This year, I’m omitting the referrals except to mention them anecdotally.  The stats don’t appear to be especially accurate.  For instance, Hacker News referrals don’t always register properly in my analytics for some reason.  I can tell you that the biggest sources of traffic that aren’t regular readers or RSS feeds are Reddit and Hacker News.  I get a good bit of referral traffic from social media as well, and still plenty of hits from googling.

Finally, A Word of Thanks

In the end, I’d just like to thank you all for reading. Everyone has precious few hours in the day, and I’m flattered that you choose to spend some of them reading my blog when there are plenty of other things you could be doing. I’d also like to welcome the influx of traffic resulting from my recent stint at Hacker News and thank those readers for subscribing to the feed; giving my posts a chance; and for all of the retweets, mentions, upvotes and comments. I haven’t necessarily had time to respond to everyone the way I’d like due to the volume of traffic/comments, but please know that it is nevertheless very much appreciated.

So thanks again, and have a great 2014!


Merry Christmas

I’d just like to wish all of the DaedTech readers that celebrate Christmas a Merry Christmas. I celebrate the holiday myself, and Christmas for me typically involves some travel to see family during the time between Christmas and New Year’s Eve. This year is no exception, so I probably won’t be posting, except perhaps for a retrospective post that I have in the hopper. So, enjoy your holiday time, and look for your regularly scheduled posts to resume after the new year.


Bike Sheds, Ducks, and Nuclear Reactors

I learned a new term the other day, and it was great. I enjoyed this so much because it was a term for a concept I was familiar with but for which I had never had a word. I haven’t been this satisfied with a new word since “taking pleasure in seeing bad things happen to your enemies” was replaced with “schadenfreude.” I was listening to an episode of the Herding Code podcast from this past summer, and Nik Molnar described something called “bikeshedding.” This colloquialism is derived from an argument made by a man named C. Northcote Parkinson, later called “Parkinson’s Law of Triviality,” that more debate within organizations surrounds trivial issues than extremely important ones.

Here’s a quick recap of his argument. He discusses a fictional committee with three items on its agenda: approving an atomic reactor, approving a bike shed, and approving a year’s supply of refreshments for committee meetings. The reactor is approved with almost no discussion since it is so staggeringly complicated and expensive that no one can really wrap their heads around it. A lot more arguing will be done about the bike shed, since committee members are substantially more likely to understand the particulars of construction, the cost of materials, etc. The most arguing, however, will be reserved for the subject of which drinks to have at the next meeting, since everyone understands and has an opinion about that.

This is human nature as I’ve experienced it, almost without exception. I can’t tell you how many preposterous meeting discussions have shaved precious hours off of my life while people argue about the most ridiculous things. One of the most common that comes to mind is people who are about to drive somewhere 20 minutes away spending 5-10 minutes arguing about which streets to take en route.

In the life of a programmer, this phenomenon often has special and specific significance. On the Wikipedia page I linked, I also noticed a reference to a fun Jeff Atwood post about awesome programming terms that had been coined by respondents to an old Stack Overflow question. Take a look at number five, a “duck”:

A feature added for no other reason than to draw management attention and be removed, thus avoiding unnecessary changes in other aspects of the product.

I don’t know if I actually invented this term or not, but I am certainly not the originator of the story that spawned it.

This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to PMs) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn’t, they weren’t adding value.

The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen’s animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the “actual” animation.

Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, “that looks great. Just one thing – get rid of the duck.”

When I saw this, I actually guffawed at my desk. I didn’t do that just because I found this funny — I have actually employed this strategy in the past, and I see that I am not alone in my rank cynicism in a world of Parkinson’s Law of Triviality. It definitely seems that project management types, particularly ones that were never technical (or at least never any good at it), feel a great need to seize upon something that they can understand and offer an opinion on it in the form of an edict, basically in order to assert some kind of dominance. So, early in my career, when proposing technical plans, I developed a habit of dropping a few red-herring mistakes into the mix to make sure that obligatory posturing was dispensed with and any remaining criticisms were legitimate — I’d purposely throw in something like “we’re not going to be doing any documentation because of a lack of time,” to which I would receive the admonition, “documentation is important, so add it in.” “Okie-dokie. Moving on.”


It isn’t just pointy-haired fire-hydrant peeing that exposes us to this phenomenon, however. We, as developers, are some of the worst offenders. The Wikipedia article also offers up something called “Wadler’s Law,” which appears to be a corollary to Parkinson’s in that it talks about developers being more likely to argue over language syntax than semantics. In other words, you’ll get furious arguments over whether to use underscores between words in function names, but often hear crickets when you ask if the function is part of a consistent broader abstraction. My experience aligns with this as well. I can think of so, so many depressing code reviews that were all about “that method doesn’t have a doc comment” or “why are you using var” or “alphabetize your includes.” I’d offer up things like “let’s look at how cohesive the types are in this namespace,” and, again, crickets.

The great thing about opinions is that they’re an endlessly renewable, free resource. You can have as many as you like about anything you like, and no one can tell you not to (at least in societies that aren’t barbarically oppressive). But what isn’t endless is people’s interest in your opinions. If you’re the person that’s offering one up intensely and about every subject as the conversation drifts from code to personal finances to football to craft beers, you’re devaluing your own currency. As you discuss and debate, be mindful of the stakes at play and be sparing when it comes to how many times you sit at the penny slots. After all, you don’t want to be the one remembered for furiously debating camel case and coffee flavors while rubber stamping the plans for Chernobyl.


Beware of The Magnetars in Your Codebase

Lately, I’ve been watching a lot of “How the Universe Works” and other similar shows about astronomy. I’ve been watching them a lot, as in, I think I have some kind of problem. I want to watch them and find them fascinating and engaging and yet I also seem suddenly to be unable to fall asleep without them on.

Last night, I was indulging this strange problem when I saw what has to be the single most intense thing in the universe: a magnetar. Occasionally when a massive star runs out of fuel in its core, it explodes as a supernova and spews matter and radiation everywhere, sending concussive shock waves hurtling out into the universe. In the aftermath, the rest of the star that doesn’t escape collapses in on itself into an unimaginably dense thing called a “neutron star,” which is the size of Manhattan but weighs as much as the sun (for perspective, a sugar cube of neutron star would weigh as much as all of the people on earth combined).

One particularly exotic type of neutron star is called a magnetar. It’s a neutron star with a magnetic field of absolutely mind-boggling strength and a crust made out of solid iron (but up to 10 billion times stronger than steel, thanks to the near-black-hole-like gravity of the star crushing imperfections out of the crystals that form the crust). A magnetar is so intensely magnetized that if the moon were a magnetar (and forget the gravity for a moment) it would tear the watch off of your wrist and render your credit cards useless. This thing rotates many times per second, whipping its magnetic field into a frenzy and sloshing the ultra-dense neutron goo that makes up its core into a froth until the internal pressure causes something called a “starquake,” which, if it were measured on the Richter scale, would be a 32. When these starquakes happen, the result is that the magnetar spews a torrent of radiation so powerful that it has a profound effect on the earth’s magnetic field and atmosphere from halfway across the Milky Way.

So to recap, a magnetar is a tiny thing leftover from a huge event that’s not really visible or particularly noticeable from a distance. At least, it isn’t noticeable until the unimaginable destructive force roiling in its bowels is randomly unleashed, and then it pretty much annihilates anything in its close vicinity and has a profound effect universally.

(Image of a magnetar courtesy of Wikipedia)

I was idly thinking about this concept today while looking at some code, and I realized something. How many projects do you work on where there’s some kind of scramble to get some new feature in ahead of schedule, to absorb scope creep and last-minute changes, or to slam some kind of customization into production for a big client with a minimum of testing? Whether this goes well or poorly, the result is generally spectacular.

And when the dust settles and everyone has taken their two or three weeks off, come down from the ledge and breathed a sigh of relief, the remnant of the effort is often some quiet, dense, unapproachable and dangerous bit of code pulsing in the middle of your code base. You don’t get too near it for fear that it will tear the watch off of your wrist or result in a starquake — okay, more accurately, that it will introduce some nasty regression bug — and you just kind of leave it there to rotate and pulse ominously.

Much later, when you’ve pretty well forgotten it, it erupts and unleashes a torrent of devastation into your team’s life. One day you suddenly recall (one day too late) that if you don’t log into that one SQL server box and restart that scheduled task on any March 1st not in a leap year, all 173,224 users of the Initrode account are suddenly unable to log into anything in their ERP system, and they’re planning a shipment of medical supplies to hurricane victims and abused puppies. You’ve had all of the atoms in your organization pulverized out of existence by the flare of a magnetar in your code base.

How do you avoid this fate? I’ll give you a list of two:

  1. Do the right thing now.
  2. Push back against creating the situation in the first place.

The first one is the more politically tenable one in organizations. The business is going to do what the business is going to do, and that’s to allow sales to promise clients a cure for cancer by June 15th if they promise to pitch in personally for steak dinners for the dev team, on their honor. It can be hard to push back against it, so what you can do is ride the storm out and then make sure you carve out time to repair the damage when the dust settles. Don’t let that rogue task threaten your very existence once a year (but not in leap years). And don’t cop out by documenting it on a wiki somewhere. Do the right thing and write some code that automates whatever it is that should trigger it to happen. While you’re at it, automate some sort of reminder scheme for monitoring purposes and some fault tolerance, since this seems pretty important. You may have needed to hack something out to meet your deadline, but there’s nothing saying you have to live with that and let it spin and pulse its way to bursting anger.
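To make “do the right thing” a little more concrete, here is a minimal sketch in C#. The names RestartScheduledTask and NotifyTeam are hypothetical stand-ins for whatever the manual step and your alerting mechanism actually are; the point is that the trigger, the retries, and the yelling all happen without anyone having to remember anything. Schedule something like this from the operating system rather than from a wiki page:

using System;
using System.Threading;

public static class InitrodeYearlyMaintenance
{
    public static void Main()
    {
        const int maxAttempts = 3;

        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                RestartScheduledTask(); // the step someone used to perform by hand
                Console.WriteLine("Yearly maintenance completed on attempt {0}.", attempt);
                return;
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine("Attempt {0} failed: {1}", attempt, ex.Message);

                if (attempt < maxAttempts)
                    Thread.Sleep(TimeSpan.FromMinutes(5)); // simple backoff before retrying
            }
        }

        // If it still failed, make noise now rather than letting it pulse quietly until next year.
        NotifyTeam("Initrode yearly maintenance failed after " + maxAttempts + " attempts; intervene now.");
    }

    private static void RestartScheduledTask() { /* hypothetical: the actual manual fix goes here */ }
    private static void NotifyTeam(string message) { /* hypothetical: email, chat webhook, pager, etc. */ }
}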

The better solution, though, is to push back on the business and not let supernovae into your development process in the first place. This is hard, but it’s the right path. Instead of disarming volatile things that you’ve introduced in a pinch, avoid introducing them altogether. Believe it or not, this is a skill that actually takes practice because it involves navigating office-political terrain beyond simply explaining things to someone in rational fashion and prevailing upon their good judgment.

But really, I could boil these two points down to one single thing that logically implies both: care about the fate of the project and the codebase. If you invest yourself in it and truly care about it, you’ll find that you naturally avoid letting people introduce explosive forces in the first place. You certainly don’t allow alien, stealth deathbombs to fester in it, waiting to spew radiation at you. Metaphorical radiation, that is. Unless you code for a nuclear power company. Then, real radiation.


The 7 Habits of Highly Overrated People

I remember having a discussion with a more tenured coworker, with the subject being the impending departure of another coworker. I said, “man, it’s going to be rough when he leaves, considering how much he’s done for us over the last several years.” The person I was talking to replied in a way that perplexed me. He said, “when you think about it, though, he really hasn’t done anything.” Ridiculous. I immediately objected and started my defense:

Well, in the last release, he worked on… that is, I think he was part of the team that did… or maybe it was… well, whatever, bad example. I know in the release before that, he was instrumental in… uh… that thing with the speed improvement stuff, I think. Wait, no, that was Bill. He did the… maybe that was two releases ago, when he… Holy crap, you’re right. He doesn’t do anything!

How did this happen? Meaning, how did I get this so wrong? Am I just an idiot? It could be, except that fails as an explanation for this particular case because the next day I talked to someone who said, “boy, we’re sure going to miss him.” It seemed I was not alone in just assuming that this guy had been an instrumental cog in the work of the group when he had really, well, not been.

In the time that has passed since that incident, I’ve paid attention to people in groups and collaborating on projects. I’ve had occasion to do this as a team member and a team lead, as a boss and a line employee, as a consultant and as a team member collaborating with consultants, and just about everything else you can think of. And what I’ve observed is that this phenomenon is not a function of the people who have been fooled but the person doing the fooling. When you look at people who wind up being highly overrated, they share certain common habits.

If you too want to be highly overrated, read on. Being overrated can mean that you’re mediocre but people think that you’re great, or it can mean that you’re completely incompetent but nestle in somewhere and go unnoticed, doing, as Peter Gibbons in Office Space puts it, “just enough not to get fired.” The common facet is that there’s a sizable deficit between your actual value and your perceived value — you appear useful while actually being relatively useless. Here’s how.


1. “Overcommunicate”

I’m putting this term in quotes because it was common enough at one place I worked to earn a spot on a corporate BS Bingo card, but I’ve never heard it anywhere else. I don’t know exactly what people there meant by it, and for all I know, neither do they, so I’m going to reappropriate it here. If you want to seem productive without doing anything useful, then a great way to do so is to make lots of phone calls, send lots of emails, create lots of memos, etc.

A lot of people mistake activity for productivity, and you can capitalize on that. If you send one or two emails a day, summarizing what’s going on with a project in excruciating detail, people will start to think of you as that vaguely annoying person who has his fingers on the pulse all of the time. This is an even better strategy if you make the rounds, calling and talking to people to get status updates as to what they’re doing before sending an email.

Now, I know what you’re thinking — that might actually be productive. And, well, it might be, nominally so. But do you notice that you’ve got a very tangible plan of action here and there’s been no mention of what the project actually involves? A great way to appear useful without being useful is to engage heavily in an activity completely orthogonal to the actual goal.

2. Be Bossy and Critical

Being an “overcommunicator” is a good start, but you can really drive your phantom value home by ordering people around and being hypercritical. If your daily (or hourly) status report is well received, just go ahead and start dropping instructions in for the team members. “We’re getting a little off schedule with our reporting, so Jim, it’d be great if you could coordinate with Barbara on some checks for report generation.” Having your finger on the pulse is one thing, but creating the pulse is a lot better. Now, you might wind up getting called out on this if you’re in no position of actual authority, but I bet you’d be surprised how infrequently this happens. Most people are conflict avoiders and reconcilers and you can use that to your advantage.

But if you do get called out (or even if you don’t), just get hypercritical. “Oh my God, Jim and Barbara, what is up with the reports! Am I going to have to take this on myself?!” Don’t worry about doing the actual work yourself — that’s not part of the plan. You’re just making it clear that you’re displeased and using a bit of shaming to get other people to do things. This shuts up people inclined to call you out on bossiness because they’re going to become sidetracked by getting defensive and demonstrating that they are, in fact, perfectly capable of doing the reports.

3. Shamelessly Self Promote

If a deluge of communication and orders and criticisms isn’t enough to convince people how instrumental you are, it never hurts just to tell them straight out. This is sort of like “fake it till you make it” but without the intention of getting to the part where you “make it.” Whenever you send out one of your frequent email digests, walk around and tell people what hard work it is putting together the digests, saying things like, “I’d rather be home with my family than staying until 10 PM putting those emails together, but you know how it is — we’ve all got to sacrifice.” Don’t worry, the 10:00 part is just a helpful ‘embellishment’ — you don’t actually need to do things to take credit for them (more on that later).

Similarly, if you are ever subject to any criticisms, just launch a blitzkrieg of things that you’ve done at your opponent and suggest that everyone can agree how awesome those things are. List every digest email you’ve sent over the last month, and mention the time you sent each one. By the fifth or sixth email, your critic will just give up out of sheer exasperation and agree that your performance has been impeccable.

4. Distract with Arguments about Minutiae

If you’re having trouble making the mental leap to finding good things about your performance to mention, you can always completely derail the discussion. If someone mentions that you haven’t checked in code in the last month, just point out that in the source control system you’re using, technically, “check in” is not the preferred verbiage. Rather, you “promote code.” The distinction may not seem important, but the importance is subtle. It really goes to the deeper philosophy of programming or, as some might call it, “the art of software engineering.” Now, when you’ve been doing this as long as I have, you’ll understand that code promotions… ha! You no longer have any idea what we were talking about!

This technique is not only effective for deflecting criticism but also for putting the brakes on policy changes that you don’t like and your peers getting credit for their accomplishments. Sure, Susan might have gotten a big feature in ahead of schedule, but a lot of her code is using a set of classes that some have argued should be deprecated, which means that it might not be as future-proof as it could be. Oh, and you’ve run some time trials and feel like you could definitely shave a few nanoseconds off of the code that executes between the database read and the export to a flat file.

5. Time It So You Look Good (Or Everyone Else Looks Bad)

If you ever wind up in the unfortunate position of having to write some code, you can generally get out of it fairly easily. The most tried and true way is for the project to be delayed or abandoned, and you can do your part to make that happen while making it appear to be someone else’s fault. One great way to do that is to create a huge communication gap that appears to be everyone’s fault but yours.

Here’s what I mean. Let’s say that you’re working with Bill and Bill goes home every night at 6:00 PM. At 6:01, send Bill an email saying that you’re all set to get to work, but you just need the class stub from him to get started. Sucker. Now 15 hours are going to pass where he’s the bottleneck before he gets in at 9:00 the next morning and responds. If you’re lucky and you’ve buried him in digest emails, you might even get an extra hour or two.

If Bill wises up to your game and stays a few extra minutes, start sending those emails at like 10:00 PM from home. After all, what’s it to you? It takes just as little effort not to work at 6:00 as it does at 10:00. Now, you’ve given up a few hours of response time, but you’re still sitting pretty at 11 hours or so, and you can now show people that you work pretty much around the clock and that if you’re going to be saddled with an idiot like Bill that waits 12 hours to get you critical information, you pretty much have to work around the clock.

6. Plan Excuses Ahead of Time

This is best explained with an example. Many years ago, I worked as lead on a project with an offshore consultant who was the Rembrandt of pre-planned excuses. This person’s job title was some variant of “Software Engineer” but I’m not sure I ever witnessed software or engineering even attempted. One morning I came in and messaged him to see if he’d made progress overnight on a task I’d set him to work on. He responded by asking if I’d seen his email from last night. I hadn’t, so I checked. It said, “the clock is wrong, and I can’t proceed — please advise.”

After a bit of back and forth, I came to realize that he was referring to the clock in the taskbar on his desktop. I asked him how this could possibly be relevant and what he told me was that he wasn’t sure how the clock being off might affect the long-running upload that was part of the task, and that since he wasn’t familiar with Slackware Linux, he didn’t know how to adjust the clock. I kid you not. A “software engineer” couldn’t figure out how to change the time on his computer and thought that this time being wrong would adversely affect an upload that in no way depended on any kind of timestamp or notion of time. That was his story, and he was sticking with it.

And it is actually perfect. It’s exasperating but unassailable. After all, he was a “complete expert in Windows and several different distributions of Linux,” but Slackware was something he hadn’t been trained in, so how could he possibly be expected to complete this impossible task without me giving him instructions? And, going back to number five, where had I been all night, anyway? Sleeping? Pff.

7. Take Credit in Non-Disprovable Ways

The flip side of pre-creating explanations for non-productivity so that you can sit back in a metaphorical hammock and be protected from accusations of laziness is to take credit inappropriately, but in ways that aren’t technically wrong. A good example of this might be to make sure to check in a few lines of code to a project that appears as though it will be successful so that your name automatically winds up on the roster of people at the release lunch. Once you’re at that lunch, no one can take that credit away from you.

But that’s a little on the nose and not overly subtle. After all, anyone looking can see that you added three lines of white space, and objective metrics are not your friends. Do subjective things. Offer a bunch of unsolicited advice to people and then later point out that you offered “leadership and mentoring.” When asked later at a post mortem (or deposition) whether you were a leader on the project, people will remember those moments and say, grudgingly and with annoyance, “yeah, I guess you could say that.” And, that’s all you’re after. If you’re making sure to self-promote as described in section three, all you really need here is a few people that won’t outright say that you’re lying when asked about your claims.

Is This Really For You?

Let me tell you something. If you’re thinking of doing these things, don’t. If you’re currently doing them, stop. I’m not saying this because you’ll be insufferable (though you will be) and I want to defend humanity from this sort of thing. I’m offering this as advice. Seriously. These things are a whole lot more transparent than the people who do them think they are, and acting like this is a guaranteed way to have a moment in life where you wonder why you’ve bounced around so much, having so much trouble with the people you work with.

A study I once read on the nature of generosity said that appearing generous conferred an evolutionary advantage. Apparently, generous people were more likely to be the recipients of help during lean times. It also turned out that the best way to appear generous was actually to be generous, since false displays of generosity were usually discovered and resulted in ostracism and a substantially worse outcome than even simply being miserly. It’s the same thing in the workplace with effort and competence. If you don’t like your work or find it overwhelming, then consider doing something else or finding an environment that’s more your speed rather than being manipulative or playing games. You and everyone around you will be better off in the end.


Wasted Talent: The Tragedy of the Expert Beginner

Back in September, I announced the Expert Beginner e-book. In that same post, I promised to publish the conclusion to the series around year-end, so I’m now going to make good on that promise. If you like these posts, you should definitely give the e-book a look, though. It’s more than just the posts strung together — it shuffles the order, changes the content a touch, and smooths them into one continuous story.

But, without further ado, the conclusion to the series:

The real, deeper sadness of the Expert Beginner’s story lurks beneath the surface. The sinking of the Titanic is sharply sad because hubris and carelessness led to a loss of life, but the sinking is also sad in a deeper, more dull and aching way because human nature will cause that same sort of tragedy over and over again. The sharp sadness in the Expert Beginner saga is that careers stagnate, culminating in miserable life events like dead-end jobs or terminations. The dull ache is the endlessly mounting deficit between potential and reality, aggregated over organizations, communities, and even nations. We live in a world of “ehhh, that’s probably good enough,” or, perhaps more precisely, “if it ain’t broke, don’t fix it.”

There is no shortage of literature on the subject of “work-life balance,” nor of people seeking to split the difference between the stereotypical, ruthless executive with no time for family and the “aim low,” committed family type that pushes a mop instead of following his dream, making it so that his children can follow theirs. The juxtaposition of these archetypes is the stuff that awful romantic dramas starring Katherine Heigl or Jennifer Lopez are made of. But that isn’t what I’m talking about here. One can intellectually stagnate just as easily working eighty-hour weeks or intellectually flourish working twenty-five-hour ones.

I’m talking about the very fabric of Expert Beginnerism as I defined it earlier: a voluntary cessation of meaningful improvement. Call it coasting or plateauing if you like, but it’s the idea that the Expert Beginner opts out of improvement and into permanent resting on one’s (often questionable) laurels. And it’s ubiquitous in our society, in large part because it’s encouraged in subtle ways. To understand what I mean, consider institutions like fraternities and sororities, institutions granting tenure, multi-level marketing outfits, and often corporate politics with a bias toward rewarding loyalty. Besides some form of “newbie hazing,” what do these institutions have in common? Well, the idea that you put in some furious and serious effort up front (pay your dues) to reap the benefits later.

This isn’t such a crazy notion. In fact, it looks a lot like investment and saving the best for last. “Work hard now and relax later” sounds an awful lot like “save a dollar today and have two tomorrow,” or “eat all of your carrots and you can enjoy dessert.” For fear of getting too philosophical and prying into religion, this gets to the heart of the notion of Heaven and the Protestant Work Ethic: work hard and sacrifice in the here and now, and reap the benefits in the afterlife. If we aren’t wired for suffering now to earn pleasure later, we certainly embrace and inculcate it as a practice, culturally. Who is more a symbol of decadence than the procrastinator–the grasshopper who enjoys the pleasures of the here and now without preparing for the coming winter? Even as I’m citing this example, you probably summon some involuntary loathing for the grasshopper for his lack of vision and sobriety about possible dangers lurking ahead.

A lot of corporate culture creates a manufactured, distorted version of this with the so-called “corporate ladder.” Line employees get in at 8:30, leave at 5:00, dress in business-casual garb, and usually work pretty hard or else. Managers stroll in at 8:45 and sometimes cut out a little early for this reason or that. They have lunches with the corporate credit card and generally dress smartly, but if they have to rush into the office, they might be in jeans on a Thursday and that’s okay. C-level executives come and go as they please, wear what they want, and have you wear what they want. They play lots of golf.

There’s typically not a lot of illusion that those in the positions of power work harder than line employees in the sense that they’re down operating drill presses, banging out code, doing data entry, crunching numbers, etc. Instead, these types are generally believed to be the ones responsible for making the horrible decisions that no one else would want to make and never being able to sleep because they are responsible for the business 24/7. In reality, they probably whack line employees without a whole lot of worry and don’t really answer that call as often as you think. Life gets sweeter as you make your way up, and not just because you make more money or get to boss people around. The C-level executives…they put in their time working sixty-hour weeks and doing grunt work specifically to get the sweet life. They earned it through hard work and sacrifice. This is the defining narrative of corporate culture.

But there’s a bit of a catch here. When we culturally like the sound of a narrative, we tend to manufacture it even when it might not be totally realistic. For example, do we promote a programmer who pours sixty hours per week into his job for five years to manager because he would be good at managing people or because we like the “work hard, get rewarded” story? Chicken or egg? Do we reward hard work now because it creates value, or do we manufacture value by rewarding it? I’d say, in a lot of cases, it’s fairly ingrained in our culture to be the latter.

In this day and age, it’s easy to claim that my take here is paranoid. After all, the days of fat pensions and massive union graft have fallen by the wayside, and we’re in some market, meritocratic renaissance, right? Well, no, I’d argue. It’s just that the game has gotten more distributed and more subtle. You’ll bounce around between organizations, creating the illusion of general market merit, but in reality, there is a form of subconscious collusion. The main determining factor in your next role is your last role. Your next salary will probably be five to ten percent more than your last one. You’re on the dues-paying train, even as you bounce around and receive nominally different corporate valuations. Barring aberration, you’re working your way, year in and year out, toward an easier job with nicer perks.

But what does all of this have to do with the Expert Beginner? After all, Expert Beginners aren’t CTOs or even line managers. They’re, in a sense, just longer-tenured grunts that get to decide what source control or programming language to use. Well, Expert Beginners have the same approach, but they aim lower in the org chart and have a higher capacity for self-delusion. In a real sense, management and executive types are making an investment of hard work for future Easy Street, whereas Expert Beginners are making a more depressing and less grounded investment in initial learning and growth for future stagnation. They have a spasm of marginal competence early in their careers and coast on the basis of this indefinitely, with the reward of not having to think or learn and having people defer to them exclusively because of corporate politics. As far as rewards go, this is pretty Hotel California. They’ve put in their time, paid their dues, and now they get to reap only the meager rewards of intellectual indolence and ego-fanning.

In terms of money and notoriety, there isn’t much to speak of either. The reward they receive isn’t a Nobel Prize or a world championship in something. It’s not even a luxury yacht or a star on the Walk of Fame. We have to keep getting more modest. It’s not a six bedroom house with a pool and a Lamborghini. It’s probably just a run-of-the-mill upper middle class life with one nice vacation per year and the prospect of retiring and taking that trip they’ve always wanted, a visit to Rome and Paris. They’ve sold their life’s work, their historical legacy, and their very existence for a Cadillac, a nice set of woods and irons, a tasteful ranch-style house somewhere warm, and a trans-Atlantic flight or two in retirement. And that–that willingness to have a low ceiling and that short-changing of one’s own potential–is the tragedy of the Expert Beginner.

Expert Beginners are not dumb people, particularly given that they tend to be knowledge workers. They are people who started out with a good bit of potential–sometimes a lot of it. They’re the bowlers who start at 100 and find themselves averaging 150 in a matter of weeks. The future looks pretty bright for them right up until they decide not to bother going any further. It’s as if Michael Jordan had decided that playing some pretty good basketball in high school was better than what most people did, or if Mozart had said, “I just wrote my first symphony, which is more symphonies than most people write, so I’ll call it a career.” Of course, most Expert Beginners don’t have such prodigious talent, but we’ll never hear about the accomplishment of the rare one that does. And we’ll never hear about the more modest potential accomplishments of the rest.

At the beginning of the saga of the Expert Beginner, I detailed how an Expert Beginner can sabotage a group and condemn it to a state of indefinite mediocrity. But writ large across a culture of “good enough,” the Tragedy of the Expert Beginner stifles accomplishments and produces dull tedium interrupted only by midlife crises. En masse in our society, they’ll instead be taking it easy and counting themselves lucky that their days of proving themselves are long past. And a shrinking tide lowers all boats.


Faking An Interface Mapping in Entity Framework

Entity Framework Backstory

In the last year or so, I’ve been using Entity Framework in the new web-based .NET solutions for which I am the architect. I’ve had some experience with different ORMs on different technology stacks, including some roll-your-own ones, but I have to say that Entity Framework and its integration with Linq in C# is pretty incredible. Configuration is minimal, and the choice of creating code, model or database first is certainly appealing.

One gripe that I’ve had, however, is that Entity Framework wants to bleed its way up past your infrastructure layer and permeate your application. The path of least resistance, by far, is to let Entity Framework generate entities and then use them throughout the application. This is often described, collectively, as “the model,” and it impedes your ability to layer the application. The most common form of this that I’ve seen is simply to have an Entity Framework context and allow it to be accessed directly in controllers, code-behind, or view models. In a way, this is strangely reminiscent of Active Record, except that the in-memory joins and navigation operations are a lot more sophisticated. But the overarching point is still the same — not a lot of separation of concerns, and an avoidance of domain modeling in favor of letting the database ooze its way into your code.
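To make that path of least resistance concrete, here is roughly what it looks like in an MVC controller. This is a hedged sketch; the context and entity names (PlaypenDatabaseEntities, Customer) are hypothetical stand-ins for whatever EF generates:

using System.Linq;
using System.Web.Mvc;

public class CustomerController : Controller
{
    public ActionResult Index()
    {
        // The EF context and its generated entities, used directly in the presentation layer.
        using (var context = new PlaypenDatabaseEntities())
        {
            var activeCustomers = context.Customers
                .Where(c => c.IsActive)
                .OrderBy(c => c.Name)
                .ToList();

            // EF-generated entities handed straight to the view: no layering, no domain model.
            return View(activeCustomers);
        }
    }
}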


I’m not a fan of this approach, and I very much prefer applications that are layered or otherwise separated in terms of concerns. I like there to be a presentation layer for taking care of presenting information to the user, a service layer to serve as the application’s API (for flexibility and acceptance testing), a domain layer to handle business rules, and a data access layer to manage persistence (Entity Framework actually takes care of this layer as-is). I also like the concept of “persistence ignorance,” where the rest of the application doesn’t have to concern itself with where persistent data is stored — it could be SQL Server, Oracle, a file, a web service… whatever. This renders the persistence model an ancillary implementation detail which, in my opinion, is what it should be.

A way to accomplish this is to use the “Repository Pattern,” in which higher layers of the application are aware of a “repository,” which is an abstraction that makes entities available. Where those entities come from isn’t any concern of those layers — they’re just there when they’re needed. But in a lot of ways with Entity Framework, this is sort of pointless. After all, if you hide the EF-generated entities inside of a layer, you don’t get the nice query semantics. If you want the automation of Entity Framework and the goodness of converting Linq expressions to SQL queries, you’re stuck passing the EF entities around everywhere without abstraction. You’re stuck leaking EF throughout your code base… or are you?
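As a hedged illustration of why that is “sort of pointless,” here is what a repository that hides EF behind plain delegates might look like (the entity and context names are hypothetical). The abstraction holds up, but because the predicate is a Func rather than an expression tree, the filtering runs in memory instead of being translated into SQL:

using System;
using System.Collections.Generic;
using System.Linq;

// A conventional repository that hides Entity Framework from the layers above it.
public class CustomerRepository
{
    private readonly PlaypenDatabaseEntities _context;

    public CustomerRepository(PlaypenDatabaseEntities context)
    {
        _context = context;
    }

    public IEnumerable<Customer> Find(Func<Customer, bool> predicate)
    {
        // Func<> binds to Enumerable.Where rather than Queryable.Where, so EF materializes
        // the whole table and the filtering happens in memory.
        return _context.Customers.AsEnumerable().Where(predicate);
    }
}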

Motivations

Here’s what I want. I want a service layer (and, of course, presentation layer components) that is in no way whatsoever aware of Entity Framework. In the project in question, we’re going to have infrastructure for serializing to files and calling out to web services, and we’re likely to do some database migration and use of NoSQL technologies. It is a given that we need multiple persistence models. I also want at least two different versions of the DTOs: domain objects in the domain layer, hidden under the repositories, and model objects for binding in the presentation layer. In MVC, I’ll decorate the models with validation attributes and do other uniquely presentation layer things. In the WPF world, these things would implement INotifyPropertyChanged.

Now, it’s wildly inappropriate to do this to the things generated by Entity Framework and to have the domain layer (or any layer but the presentation layer) know about these presentation layer concepts: MVC validation, WPF GUI events, etc. So this means that some kind of mapping from EF to models and vice-versa is a given. I also want rich domain objects in the domain layer for modeling business logic. So that means that I have two different representations of any entity in two different places, which is a classic case for polymorphism. The question then becomes “interface implementation or inheritance?” And I choose interface.

My reasoning here is certainly subject to critique, but I’m a fan of creating an assembly that contains nothing but interfaces and having all of my layer assemblies take a dependency on that. So, the service layer, for instance, knows nothing about the presentation layer, but the presentation layer also knows nothing about the service layer. Neither one has an assembly reference to the other. A glaring exception to this is DTOs (and, well, custom exceptions). I can live with the exceptions, but if I can eliminate the passing around of vacuous property bags, then why not? Favoring interfaces also helps with some of the weirdness of having classes in various layers inherit from things generated by Entity Framework, which seems conspicuously fragile. If I want to decorate properties with attributes for validation and binding, I have to use EF to make sure that these things are virtual and then make sure to override them in the derived classes. Interfaces it is.
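Here is a minimal sketch of the arrangement, with hypothetical names. The interface lives in the types assembly, the EF-generated partial class implements it in the domain, and a separate presentation model implements the same interface while adding the MVC validation concerns mentioned above:

using System.ComponentModel.DataAnnotations;

// Poc.Types: nothing but interfaces; every other assembly references this one.
public interface ICustomer
{
    int Id { get; set; }
    string Name { get; set; }
    bool IsActive { get; set; }
}

// Poc.Domain: the EF entity is a partial class, so the interface can be bolted on here.
// (In the real project, the properties come from the T4-generated half of the partial.)
public partial class Customer : ICustomer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// Presentation layer: a separate model implements the same interface and adds validation attributes.
public class CustomerModel : ICustomer
{
    public int Id { get; set; }

    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    public bool IsActive { get; set; }
}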

Get Ur Dun!

So that’s great. I’ve decided that I’m going to use ICustomer instead of Customer throughout my application, auto-generating domain objects that implement an interface. That interface will be generated automatically and used by the rest of the application, including with full-blown query expression support that gets translated into SQL. The only issue with this plan is that every google search that I did seemed to suggest this was impossible or at least implausible. EF doesn’t support that, Erik, so give it up. Well, I’m nothing if not inappropriately stubborn when it comes to bending projects to my will. So here’s what I did to make this happen.

I have three projects: Poc.Console, Poc.Domain, and Poc.Types. In Domain, I pointed an EDMX at my database and let ‘er rip, generating the T4 for the entities and also the context. I then copied the Entity T4 template to the types assembly, where I modified it. In types, I added an “I” to the name of the class, changed it to be an interface instead of a class, removed all constructor logic, removed all complex properties and navigation properties, and removed all visibilities. In the domain, I modified the entities to get rid of complex/navigation properties and added an implementation of the interface of the same name. So at this point, all Foo entities now implement an identical IFoo interface. I made sure to leave Foo as a partial because these things will become my domain objects.

With this building, I wrote a quick repository POC. To do this, I installed the NuGet package for System.Linq.Dynamic, which is a really cool utility that lets you turn arbitrary strings into Linq query expressions. Here’s the repository implementation:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class Repository<TEntity, TInterface> where TEntity : class, new() where TInterface : class
{
    private PlaypenDatabaseEntities _context;

    /// <summary>
    /// Initializes a new instance of the Repository class.
    /// </summary>
    /// <param name="context">The Entity Framework context to query.</param>
    public Repository(PlaypenDatabaseEntities context)
    {
        _context = context;
    }

    public IEnumerable<TInterface> Get(Expression<Func<TInterface, bool>> predicate = null)
    {
        IQueryable<TEntity> entities = _context.Set<TEntity>();

        if (predicate != null)
        {
            // Render the interface-typed expression as a string, e.g. (c.IsActive AndAlso (c.Name == "Smith"))
            var predicateAsString = predicate.Body.ToString();
            var parameterName = predicate.Parameters.First().ToString();
            var parameter = Expression.Parameter(typeof(TInterface), parameterName);

            // Strip the parameter prefix and swap the compiled operator names for the ones ParseLambda understands.
            string stringForParseLambda = predicateAsString.Replace(parameterName + ".", string.Empty).Replace("AndAlso", "&&").Replace("OrElse", "||");

            // Re-parse the string as a lambda over the concrete entity type so that EF can translate it to SQL.
            var newExpression = System.Linq.Dynamic.DynamicExpression.ParseLambda<TEntity, bool>(stringForParseLambda, new[] { parameter });
            entities = entities.Where(newExpression);
        }

        foreach (var entity in entities)
            yield return entity as TInterface;
    }
}

Here’s the gist of what’s going on. I take an expression of IFoo and turn it into a string. I then figure out the parameter’s name so that I can strip it out of the string, since this is the form that will make ParseLambda happy. Along these same lines, I also need to replace “AndAlso” and “OrElse” with “&&” and “||” respectively. The former format is how expressions are compiled, but ParseLambda looks for the more traditional expression combiners. Once it’s in a pleasing form, I parse it as a lambda, but with type Foo instead of IFoo. That becomes the expression that EF will use. I then query EF and cast the results back to IFoos.
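For a sense of what consuming this looks like, here is a hypothetical usage with a Customer entity and its ICustomer interface (both names are stand-ins). The caller works entirely in terms of ICustomer, but the predicate still ends up translated to SQL against the concrete Customer type:

using (var context = new PlaypenDatabaseEntities())
{
    var repository = new Repository<Customer, ICustomer>(context);

    // An Expression<Func<ICustomer, bool>> that gets rewritten and re-parsed against Customer.
    var activeSmiths = repository.Get(c => c.IsActive && c.Name == "Smith");

    foreach (var customer in activeSmiths)
        Console.WriteLine(customer.Name);
}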

Now, I’ve previously blogged that casting is a failure of polymorphism. And this is like casting on steroids and some hallucinogenic drugs for good measure. I’m not saying, “I have something the compiler thinks is an IFoo but I know is a Foo,” but rather, “I have what the compiler thinks of as a non-compiled code scheme for finding IFoos, but I’m going to mash that into a non-compiled scheme for finding Foos in a database, force it to happen and hope for the best.” I’d be pretty alarmed if not for the fact that I was generating interface and implementation at the same time, and that if I define some other implementation to pass in, it must have any and all properties that Entity Framework is going to want.

This is a proof of concept, and I haven’t lived with it yet. But I’m certainly going to try it out and possibly follow up with how it goes. If you’re like me and were searching for the Holy Grail of how to have DbSet<IFoo> or how to use interfaces instead of POCOs with EF, hopefully this helps you. If you want to see the T4 templates, drop a comment and I’ll put up a gist on github.

One last thing to note is that I’ve only tried this with a handful of lambda expressions for querying, so it’s entirely possible that I’ll need to do more tweaking for some edge case scenarios. I’ve tried this with a handful of permutations of conjunction, disjunction, negation and numerical expressions, but what I’ve done is hardly exhaustive.

Happy abstracting!

Edit: Below are the gists:

  • Entities.Context.tt for the context. Modified to have DBSets of IWhatever instead of Whatever.
  • Entities.tt for the actual, rich domain objects that reside beneath the repositories. These guys have to implement IWhatever.
  • DTO.tt contains the actual interfaces. I modified this T4 template not to generate navigation properties at all because I don’t want that kind of rich relationship definition part of an interface between layers.


Decision Points in Programming

I have a sort of personality quirk that causes me to constantly play what I describe to others as the “what-if game.” This is where I have some kind of oddball thought about altering something that we take for granted and imagining how it plays out. Lest you think that I’m engaging in fatuous self-aggrandizing, I’m not talking about some kind of fleeting stoner thought like “what if I had like eight million Doritos and also X-ray vision?!?” I mean that I actually really start to think strange things through in detail.

For example, not too long ago I was in an elevator and thought to myself, “would I ride this elevator if I knew that there was a 1 in 10,000 chance that the elevator would plummet to the bottom of the elevator shaft?” I thought that I would. I was going up a ways and the odds were in my favor. I then thought to myself that this was a rational choice but viscerally insane — why take the chance?

This led to the thought “what if elevators around the world just suddenly had those odds of that outcome for some reason, and it was intrinsic to the nature of elevators?” Meaning, nothing we could do would possibly fix it. Elevators are now synonymous with 1 in 10,000 plummets. How does the world react?

It’s a wild thing to think about, but the predictive possibilities and analysis are endless. First of all, we’ve got all of these tall buildings, so it’s not as though we’d just leave everything in them and become brownstone dwellers. Some brave souls would go up to get things from these buildings and keep playing the odds. The property value of high-rises would immediately plummet, and you’d probably invert the real-estate structure nearly overnight, with suburban/country home prices skyrocketing and swanky downtown high-rises becoming where extremely poor people and drug addicts lived (who else would routinely brave the odds?). I think the buildings would still stand because of the sheer amount of effort required to knock them down and the fact that we actually develop quite a tolerance for risky things (like driving to work every day).

There’d also be odd anthropological effects. I’d imagine that a whole generation of teenage thrill seekers and death defiers would start doing elevator joy rides to prove their mettle. People would develop all kinds of cargo cult ways to stand or sit in the elevators with a mind toward simply surviving the plummets. In fact, perhaps humankind would just become really good at making the plummets survivable. Politically, I’d imagine that a huge wedge issue debate would emerge about freedom to ride elevators versus the sanctity of life or something. I could go on forever about this, but I’ll have mercy and stop now.

The point is that I take these mental trips several times per day, considering a whole variety of topics. Most of the thoughts that emerge are bizarre and beneficial only as exercises in creativity, like the elevator example, but some are genuine ideas for reboots in thinking about our craft. I find the exercise of indulging these mental divergences and quasi-daydreams to be a good way to get the subconscious brain working on perhaps more immediate problems.

So if you’re up for it, I invite you to have a go at this sort of thinking, but perhaps in a more structured sense. At times in the history of programming, decisions were made or ideas proposed that wound up having a profound effect on the industry. Imagine a world where these had gone differently:

  1. Tony Hoare introduced what he himself later called “the billion dollar mistake”: the null reference. But what if there were no null?
  2. A lot of what we do to this day as programmers has its roots in decisions made for the typewriter: for example, the QWERTY keyboard and using CR/LF for end of line. What if these conventions had been different when the computer started to take off?
  3. Edsger Dijkstra famously swung the tide against the use of GOTO as a programming language construct with a seminal paper. What if it had popularly stuck around to this day and GOTO statements were still something we thought about a lot?
  4. Of the three programming paradigms (structured, object-oriented, and functional), functional is the oldest, but it lay dormant for 40 years or so before gaining serious popularity today. What would the world look like if it had been the most popular from the get-go and stayed that way?
  5. C++ really took OOP mainstream, but it did it in a language that was effectively a superset of C, a non-OOP language. This allowed for the continuation of a very procedural style of programming in an OOP language. What if that cut from C had just been made cleanly?
  6. What if the most popular object oriented languages didn’t have the concept of “static” and everything had to belong to an instance?
  7. What if Javascript had been carefully planned in an enterprise-y way, instead of thrown together in 10 days?
  8. If disk space had been as cheap early on as it is now, and the need for stored information rather than calculation had been higher, would the RDBMS as we know it ever have become popular?

Thinking through these things might just be a random exercise in imagination. But, who knows, it may give you an oblique solution to a problem you’ve been mulling over or a different philosophical approach to some aspect of programming. Things that we do, even highly conventional or traditional ones, are always fair game for reevaluation.

By

Create a Windows Share on Your Raspberry Pi

If I had to guess at this blog’s readership demographic, I’d imagine that the majority of readers work mainly in the .NET space and within the Microsoft technology ecosystem in general. My background is a bit more eclectic, and, before starting with C# and WPF full time in 2010, I spent a lot of years working with C++ and Java in Linux. As such, working with a Raspberry Pi is sort of like coming home in a way, and I thought I’d offer up some Linux goodness for any of you who are mainly .NET but interested in the Pi.

One thing that you’ve probably noticed is that working with files on the Pi is a bit of a hassle. Perhaps you use FTP and something like Filezilla to send files back and forth, or maybe you’ve gotten comfortable enough with the git command line in Linux to do things that way. But wouldn’t it be handy if you could simply navigate to the Pi’s files the way you would a shared drive in the Windows world? Well, good news — that’s what Samba is for. It allows your Linux machines to “speak Windows” when it comes to file shares.

Here’s how to get it going on your Pi. This assumes that you’ve already set up SSH.

  1. SSH into your Raspberry Pi and type “sudo apt-get install samba” which will install samba.
  2. Type “y” and hit enter when the “are you sure” prompt comes up telling you how much disk space this will take.
  3. Next, do a “sudo apt-get install samba-common-bin” to install a series of utilities and add-ons to the basic Samba offering that will make working with it much easier.
  4. Now, type “sudo nano /etc/samba/smb.conf” to edit, with elevated permissions, the newly installed samba configuration file.
  5. If you navigate to your Pi’s IP address from Windows (start, run, “\\<your Pi’s IP>”), you’ll see that it comes up but contains no folders. That’s because samba is running but you haven’t yet configured a share.
  6. Navigate to the line in the samba configuration file with the heading “[homes]” (line 244 at the time of this writing), and then find the setting underneath that says “browseable = no”. This configuration says that the home directories on the pi are not accessible. Change it to yes, save the config file, and observe that refreshing your file explorer window now shows a folder: “homes.” Cool! But don’t click on it because that won’t work yet.
  7. Now, go back and change that setting back under homes because we’re going to set up a share a different way for the Pi. I just wanted to show you how this worked. Instead of tweaking one that they provide by default, we’re going to create our own.
  8. Add the following to your smb.conf file, somewhere near the [homes] share (a sample share definition appears just after this list).
  9. Here’s what this sets up. The name of the share is going to be “pi” and we’re specifying that it can be read, written, browsed. We’re also saying that guest is okay (anyone on the network can access it) and that anyone on the network can create files and directories. Just so you know, this is an extremely permissive share that would probably give your IT/security guy a coronary.
  10. Now, go refresh your explorer window on your Windows machine, and voila!
    (Screenshot: the new “pi” share showing up in Windows Explorer.)
  11. If changes you make to samba don’t seem to take effect, you can always do “sudo service samba restart” to stop and restart samba. That shouldn’t have been strictly necessary for this tutorial, but it’s handy to know and is always a good first troubleshooting step.
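
The share definition itself appeared as a screenshot in the original post, so here is a sketch of a stanza consistent with the description in step 9. The path is an assumption (your Pi’s home directory), and you can tighten the permissions considerably once it’s working.

# A wide-open share named "pi": readable, writeable, browseable, and open to guests.
# The path below is an assumption; point it wherever you want the share rooted.
[pi]
   comment = Raspberry Pi share
   path = /home/pi
   browseable = yes
   writeable = yes
   guest ok = yes
   public = yes
   create mask = 0777
   directory mask = 0777

If Windows doesn’t pick up the new share right away, the “sudo service samba restart” from step 11 will force the configuration to reload.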

And that’s it. You can now edit files on your Pi to your heart’s content from within Windows, as well as drag and drop files to/from your Pi, just as you would with any Windows network share. Happy Pi-ing!
