Monday, December 15, 2014

Porting To Windows CE - How Hard Could It Be?

What Tiggers Do Best

I've always considered myself someone who is quick on the uptake. If you throw a new challenge at me, I expect that I should quickly engage myself and learn the basics. Soon I will master the material and be unstoppable in future pursuits. There may be setbacks, certainly, but I'll learn from my mistakes and begin to climb the ladder to a peak of any elevation.

A few years ago, I was tasked with just such a challenge. I was asked to take an existing application (one that I was not very familiar with) and port the code to run on Windows CE version 4.2. "How hard could it be?" I remember thinking. Porting code to run on Windows CE, that's what Tiggers do best!

After a bit of stumbling around, I found Microsoft eMbedded Visual C++ 4.0 available for download. This seemed to be exactly the tool I needed. A bit more searching turned up some service packs and an SDK with support for my target platform. Tools in hand -- or computer -- I set off to modify the code.

I began a new Windows CE project and attempted to copy our existing structure using the new tool set. Wielding the power of preprocessor macros, I updated the code. A few functions were not defined. I was forced to use similar functions with the arguments slightly modified. There were a couple of places where I needed to implement my own replacement or grab existing libraries from the web.
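The preprocessor pattern I leaned on looks something like the sketch below. This is purely illustrative: UNDER_CE is the macro that Windows CE toolchains define, but the missing function I chose here (getenv) and the shim for it are examples of the technique, not the actual gaps from the project described above.

```cpp
#include <cstring>
#include <string>

#define UNDER_CE 1  // defined by hand here so the sketch stands alone

#ifdef UNDER_CE
// Windows CE's C runtime omits some desktop staples. Supply a replacement
// and hide the difference behind a macro so call sites stay unchanged.
inline const char* ce_getenv(const char* name) {
    // CE has no process environment; fall back to a built-in default table.
    if (std::strcmp(name, "LOG_DIR") == 0) return "\\Storage Card\\logs";
    return nullptr;
}
#define PORTABLE_GETENV ce_getenv
#else
#include <cstdlib>
#define PORTABLE_GETENV std::getenv
#endif
```

With the macro in place, the rest of the code calls PORTABLE_GETENV everywhere and compiles unmodified for both the desktop and the CE target.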

All in all, the porting process progressed smoothly. I learned a few lessons and got away from the project reasonably unscathed. My work was installed sometime later and ran without any significant issues for quite some time.

You see, the real thing that Tiggers (i.e. programmers) do best is to learn and adapt to new technology. We are learners, problem solvers, and excellent at pattern recognition, logic, and technological intuition.

The Version Explosion

Some time later, I was tasked with porting the same application to Windows CE 5. With a bit more searching, I was able to find an SDK that allowed me to continue using the old development environment. I now had multiple platform configurations for the same application, but everything was still okay.

It seemed that every six months to a year we found the need to support yet another flavor of Windows CE. Beyond Windows CE 5, every new OS version requires a new development environment. Worse yet, for platform developers, the platform builder requires yet another environment, separate from the application development environment.

The following table was compiled by Werner Willemsens and lists the development tools required for targeting various versions of Windows CE:

Version         Platform (OS) Builder -> NK.BIN    Smart Device application
Windows CE 4.x  Windows CE Platform Builder 4.x    Embedded Visual Studio 4
Windows CE 5.0  Windows CE Platform Builder 5.0    Visual Studio 2005 + SDK
Windows CE 6.0  Plugin for Visual Studio 2005      Visual Studio 2008 + SDK
Windows CE 7.0  Plugin for Visual Studio 2005      Visual Studio 2008 + SDK
Windows CE 8.0  Plugin for Visual Studio 2012      Visual Studio 2012 + SDK

Take careful note that you will require the Microsoft Visual Studio "Professional" edition or higher in order to target Windows CE. As an alternative to Visual Studio 2005 in the table above, I was able to find a Windows CE 5.0 Standard SDK for use with Microsoft eMbedded Visual C++ 4.0 SP4.

In the same article, Werner goes on to explain how to limit the number of necessary development environments in far more detail than I wish to cover here. A big thank you to Werner for the valuable information he has provided.

P.S. Did I mention that the tools above are not all compatible with the same desktop versions of Windows?

On CE's Struggles

Through my own experiences with Windows CE, I've developed a theory as to why the platform struggled to gain traction. The theory is simple and kind of sad: Microsoft's development tools are so poorly strung together that they feel like they are actively fighting the developer.

I developed for Windows CE because our business needs directly required it. After spending many hours researching and following many paths leading to dead ends, I was able to piece together a working development environment. Even after doing so, maintaining an application for an array of different OS versions becomes unpleasant and expensive: you need to buy multiple costly software licenses, even though the newer tools should, by all rights, be compatible with the older OS versions.

If, knowing nothing, I were starting a new project and given the choice between multiple platforms, I would probably not choose the one that feels like pulling teeth even before I write a simple "Hello World" application. I believe that this frustrating barrier to entry contributes significantly to developers looking at alternative choices.

A Faint Light

This is highly subjective, but I have a feeling that, with each new iteration, Microsoft is beginning to clean up its act, making Windows CE development simpler and more streamlined. They may finally be turning the corner and arriving at a new golden age of embedded development.

As with many programming tasks, getting started with Windows CE development was much more difficult than I had originally anticipated. That said, if you tread carefully and find the right resources, you can create beautiful applications on this platform, too. Hopefully posting some of the information I've discovered along the way will help make the journey just a bit simpler for the next guy.


Joshua Ganes

Thursday, September 25, 2014

The Cinnamon Twist Alert - Handling Complex Boundary Conditions

The Old Ways Are Not Always Best

A new bug report came in. After I read through it, the problem was clear. Our system did not completely support single-day batches of over 999 transactions.

The problem boiled down to a single counter used to track transactions throughout the day. The field for the counter has a fixed width of three digits. The sequence counter field begins with 001 and increments by one for each transaction all the way up to 999. This value is used to help identify and correct communication errors. When a terminal attempts to send a request but encounters an error, we resubmit the transaction using the same sequence counter. If the other side receives two similar (duplicate) transaction records with matching sequence counters (more on this later), the earlier of the two submissions is reversed and is not funded.

The problem with the sequence counter becomes obvious when you think about what happens when the terminal needs to go beyond 999 transactions in a day. The sequence counter will spill over the three digits available, reusing a value from earlier in the same batch or the forbidden value 000. Our software was not so silly as to ignore this problem. The solution, as it was implemented, was to increment the batch number by one and reset the sequence counter for the newly-created batch to 001.

Unfortunately for us, one of our clients began frequently exceeding the magic 999 transactions per day and was experiencing problems with this approach. Processing multiple batches on the same day led to reconciliation and accounting issues, while causing delays in the deposits of funds to the client's bank account. Obviously, these were problems we wanted to deal with swiftly, once and for all.

Edit for clarity: Some have asked why I didn't simply increase the width of the sequence counter field to more than three digits. This width was defined in a third-party specification and was not under my control. My software had to deal with it somehow.

A New Approach

After consulting the documentation and our technical contacts and running through a couple of false starts, we formulated a new approach to the three-digit sequence counter problem. Instead of creating a new batch to deal with the overflow, we would simply roll the sequence counter from 999 back to 001 and continue processing everything normally.
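The rollover rule itself is tiny. Here is a minimal sketch (function names are mine, not from the spec); the only subtlety is that the wrap skips the forbidden value 000:

```cpp
#include <cstdio>
#include <string>

// Advance the three-digit sequence counter: 001..999, then wrap to 001.
// The value 000 is forbidden by the spec, so the wrap skips it.
int next_sequence(int current) {
    return (current >= 999) ? 1 : current + 1;
}

// Render the counter in its fixed three-digit field, e.g. 7 -> "007".
std::string format_sequence(int seq) {
    char buf[4];
    std::snprintf(buf, sizeof buf, "%03d", seq);
    return buf;
}
```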

Our biggest concern with this new approach was related to the special duplicate transaction checking mentioned previously. The duplicate checking logic considers the following criteria when determining whether a subsequent transaction request matches an earlier one:

  • Sequence Counter
  • Card Account Number
  • Total Dollar Amount

If all three of these values match, the earlier of the two transaction requests is silently reversed, leaving the transaction totals out of balance and the client short of money. For some silly reason, people get very upset when their money goes missing unexpectedly.
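The duplicate check described above amounts to a three-way comparison. As a sketch (the field names and record shape are mine; the real message layout is defined by the third-party specification and is not reproduced here):

```cpp
#include <string>

// Hypothetical record shape for a transaction request.
struct TxnRequest {
    int seq;                 // three-digit sequence counter (001-999)
    std::string card;        // card account number
    long long amount_cents;  // total dollar amount, in cents
};

// The processor treats a later request as a duplicate of an earlier one
// only when all three fields match; the earlier one is then reversed.
bool is_duplicate(const TxnRequest& a, const TxnRequest& b) {
    return a.seq == b.seq &&
           a.card == b.card &&
           a.amount_cents == b.amount_cents;
}
```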

Mr. Cinnamon Twist

To describe a plausible scenario where I thought this might actually happen, I decided to write a brief story about a man I dubbed Mr. Cinnamon Twist.

Mr. Cinnamon Twist is a businessman with a sweet tooth. Knowing he has a long day of meetings ahead of him, he stops in at the busy corner coffee shop looking for a morning treat. He spies a delicious, gooey cinnamon twist (with double frosting). His growling, empty stomach simply cannot resist. Leaving the shop with cinnamon twist in hand, he takes two bites and wraps up the rest as he hurries to catch a train heading downtown. Through the early morning, Mr. Twist savors his treat as he goes about his work. He prepares his materials for the big afternoon presentation for a prospective new client. After a long and stressful day of work, Mr. Cinnamon Twist boards the train heading towards home. Worn out and feeling exhausted, his mind wanders back to his early morning treat. He decides that he will treat himself to another (just this once) before dragging his tired body home.

In this scenario, it's plausible that our sweet-toothed protagonist used the same credit card to pay the same amount for both a very low and very high sequence number. If all the stars aligned and these transactions happened to reuse the exact same sequence counter, this would mean that Mr. Cinnamon Twist magically received his first treat of the day without being charged for it. Great news for Mr. Twist, not so good for the coffee shop who would be out the cost of a scrumptious cinnamon-flavored treat.

Back Of The Envelope

The first problem with the above scenario is simply noticing it. Detecting this type of situation on the fly is hard enough. With the requirement to be fast, high volume, and redundant between multiple data centers, this becomes complicated very quickly. The second problem is how to correct the situation once an error has been identified. I could think of a few tricks that I might consider, but I saw no obvious trivial approach for this problem.

While trying to avoid tackling this complex condition, I paused to look at some data. How likely is the above scenario? I looked at some rough numbers to try to get an idea. I looked at the number of clients exceeding the magic 999 transaction limit. I looked at the number of transactions using the same card at the same merchant on the same day. Using the classic back-of-the-envelope approach, I calculated that we would likely only see this situation a handful of times in a year.

It seemed to me that we were looking at a lot of complicated and error-prone work to save the cost of a tray of delicious cinnamon treats each year.

The Compromise

As it turns out, there is a manual process available to correct these types of transactions. By picking up the phone and talking to a real live human being, we are able to manually single out a transaction request and force it through.

Knowing that this manual correction process was available and fearing the work required to fully automate every possibility, I proposed a compromise. We would create a scheduled script to run daily and search the database for requests matching the duplicate transaction scenario above. If any duplicates were found, we would fire off an email alert message (subject line: "Cinnamon Twist") identifying the key transaction details and describing the process for manual corrections. The worst case, I thought, was that the alert would fire too often and I would have to implement the complex solution later anyway. The best case, on the other hand, was that the alert would basically never fire, saving a great deal of time and effort.
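The heart of that daily script is a grouping pass over the day's requests. Here is a sketch of the idea (not the production code, which queried the database), keying on the same three fields the duplicate-reversal logic uses:

```cpp
#include <map>
#include <string>
#include <tuple>
#include <vector>

// (sequence counter, card number, amount in cents) -- the trio that the
// duplicate-reversal logic keys on.
using TxnKey = std::tuple<int, std::string, long long>;

// Return every key that appears more than once in a day's requests;
// each hit is a candidate "Cinnamon Twist" worth an email alert.
std::vector<TxnKey> find_cinnamon_twists(const std::vector<TxnKey>& day) {
    std::map<TxnKey, int> counts;
    for (const TxnKey& k : day) ++counts[k];
    std::vector<TxnKey> hits;
    for (const auto& entry : counts)
        if (entry.second > 1) hits.push_back(entry.first);
    return hits;
}
```

Run once a day over the batch, anything it returns goes into the alert email.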

Sounding The Alarm

The first week after installing my script, there were still no email alerts. I was beginning to feel optimistic that we might never see the alert fire in practice. They say that trouble shows up when you least expect it. The day after sharing my optimism with my coworkers, we received our first Cinnamon Twist alert.

No problem, I thought. We followed the manual procedure only to discover that both transaction requests were good and no corrective action was required. This contradicted the documentation and our general understanding of how the system should work, but who am I to look a gift horse in the mouth?

Another week or two went by before the next alert fired. It seems that my back-of-the-envelope calculations were a bit off. We were receiving more alerts than I had expected. This alert, too, turned out to be a false positive when we followed up manually.

We asked our technical contacts for clarification. After our messages got passed around a few times, our contacts eventually got back to us saying that this behavior was by design. It seems that we had worried ourselves over a problem that didn't actually exist.

We disabled the Cinnamon Twist Alert script a short time later. My "lazy" approach had saved me from implementing a lot of complicated logic for no reason.

An Ounce Of Cure

What is my point? What can we learn from these events? Perhaps it's time to spin the wheel of morality to tell us the lesson we should learn.

Maybe I was lucky. My calculations turned out to be somewhat (but not excessively) optimistic. There was a risk that we would need to handle these manual corrections frequently, leaving me scrambling to implement a complex change to relieve pressure from the rest of my team as quickly as possible. My approach was a calculated gamble, but it paid dividends even larger than I had anticipated.

To me, this is a turnabout on the old adage saying, "an ounce of prevention is worth a pound of cure." In this case, an ounce of cure (the alert and manual correction) was quicker, safer, and simpler to implement than a pound of prevention (a fully automated solution). In rare cases, the easiest way to deal with complex boundary conditions is not to. Instead, find a way to look for the errors and tidy up after they happen. Don't forget to calculate the risk and the cost, but you may just discover that you were about to make much ado about nothing.


Joshua Ganes

Wednesday, September 17, 2014

Institutional Knowledge Is The Default

This article is a follow up to my previous post on the topic of institutional knowledge.

Do As I Say, Not As I Do

Please don't interpret my recent post as a claim of personal innocence when it comes to accumulating institutional knowledge. I have completed many projects in my time that are completely devoid of, or seriously lacking in, adequate documentation.

I realized long ago that the only way to avoid becoming emotionally paralyzed by constant feelings of inadequacy is to acknowledge my own shortcomings and work hard to improve myself day by day. By staying disciplined and focusing on continuous improvement, my recent projects have been more thoroughly documented than those from only a few years ago.

The Pit Of Despair

Eric Lippert writes about the pit of despair as a place where the traps are easy to fall into and difficult to climb out of. Unfortunately, institutional knowledge fits this description to a tee.

We constantly pick up valuable little nuggets of information as we go about our duties. Sometimes these are technical details about the systems we're working with. Other times, it may simply be the knowledge of who is already an expert in a given area. Tapping into the institutional knowledge of others can be more valuable than struggling to discover everything for yourself.

There is nothing wrong with this knowledge in and of itself. This knowledge can be used to unlock further discoveries and make key decisions that allow us to avoid disasters and achieve success. The problem is that the knowledge is trapped inside a lone individual's head. Without further action, we end up continuously accumulating more and more institutional knowledge. Institutional knowledge is the default, and we must act deliberately if we intend to avoid it.

Why We Despair

Knowledge is tremendously valuable. As G.I. Joe has taught us, "knowing is half the battle." This is why distributing institutional knowledge is so important to any group of people working towards a common goal. When knowledge is trapped within a single mind, its potential is limited to that one individual. Time is wasted, uninformed decisions are made, and existing work is duplicated unnecessarily. From a business perspective, institutional knowledge is clearly bad for the bottom line.

I am about to draw a moral line in the sand. Neglecting to share institutional knowledge is regrettable, but intentionally hoarding knowledge to the detriment of the team in order to further one's own selfish ends is reprehensible. This is comparable to the salesman who viciously defends his "territory" from his coworkers to protect his own commissions. Not only does it reduce the collective effectiveness of the team, but it fosters an air of hostility and inhibits sharing important details needed to succeed.

Scaling The Walls

How then, do we climb out of the pit of despair and tiptoe around the pitfalls waiting to drag us back down? I'm no expert on this topic, but I'll share some of the things I do in my attempt to scale the walls and share my knowledge with my coworkers.

One of the best tools available at my workplace for sharing knowledge is our internal company wiki. Any pages I create on the wiki are immediately available to be searched, read, and modified by our entire company. These days, whenever I start a new project I will immediately create a new wiki page describing the basic purpose of the project and how it will work. As I continue to develop the project, I frequently edit the page with the most up-to-date understanding of the available details. As for my writing, I try to follow many of Joel Spolsky's excellent tips for writing functional specifications.

Another great way to ensure you're not accumulating institutional knowledge is to pay attention to the questions people ask. Sometimes people ask lazy questions. When they ask about something you've already covered, simply point them to the relevant documentation. If, on the other hand, they've done their homework and still require missing details or clarification, consider this a flaw in your documentation. Recognize the flaw, modify the documentation, and think about how to improve for the next time around.

By the same token, any time you find yourself asking for assistance, it's a likely sign that someone else has a collection of hidden institutional knowledge. Ask them if there's documentation, and suggest (or insist) that they write some. If nothing else, write down whatever lessons you've learned from your interactions.

Do You Validate?

A word of caution: just because you wrote some documentation doesn't mean that it's adequate.

When it comes to documentation, if I can't find it, it doesn't exist. You may have written a 500-page treatise covering every last detail of the use and maintenance of your paper clip system, including a full bibliography, glossary, and footnotes on every page. It pays me exactly zero benefit if I can't find the document after giving an honest effort to search for it in all the expected places.

Just because instructions are clear to you, that doesn't necessarily mean that they will be clear to everyone. Each person is familiar with his own style. Things that appear straightforward to you may be ambiguous or unclear to others. Instructions that seem obvious to an experienced user may involve hidden steps unknown to a novice.

A great way to check for these flaws is to ask someone to validate your documentation for you. I find this particularly effective in the case of a documented procedure. In the spirit of hallway usability testing, ask a coworker to start from scratch and try to achieve your documented goal. Watch from a distance and note every time that they get stuck or confused. Later, add additional notes for clarification. Once another person can follow your documentation with minimal fuss, then you can be confident that someone else can perform the task when you're gone.

Still No Expert

As noted previously, I am not to be considered an expert in these matters. Listed above are some tips that I've found useful in sharing my institutional knowledge with my coworkers. What are the best tips and tricks you have for avoiding the pit of despair and sharing your own institutional knowledge? Tell me in the comments.


Joshua Ganes

Sunday, September 14, 2014

What People Say When You're Gone

Parental Leave

My wife and I are pleased to announce the birth of our second daughter, Isla. She was born at the very end of May and has been providing us with baby snuggles and depriving us of sleep ever since.

I was fortunate enough to be in a position to take a decent length of paternity leave to help my wife with our two young girls and to enjoy some time together as a family. We made good use of our time by showing off Isla to our friends and family scattered across western Canada.

If you are in a position where you can manage and afford to take parental leave, I would urge you to take hold of the opportunity. Getting away from my work routine for a while was a great way to recharge and reflect on my current situation and goals. The precious first months and years of a child's life pass just as fast as the cliches say. Slowing down to experience and savor this special time with my children while I'm able is a privilege that I wouldn't want to pass up.

Returning To Work

When I came back to work, my coworkers greeted me in a variety of ways. There were those who (hopefully) jokingly told me, "I thought you were fired." There were a lot of pleasant and generic "welcome back" and "how's the family?" responses. There were also a few who genuinely expressed that they missed me and how glad they were that I was back.

Of course, on an emotional level, we all want to be missed. It's a wonderful feeling to know that you were missed and appreciated while you were gone. That got me thinking about whether I want to be missed on a professional level as well.

Professionally Speaking

Imagine if not one colleague missed you during an extended absence. That would mean they don't need or desire your assistance to do their work, or worse, that you may actually stand in their way. Imagine if your boss didn't miss you either. It would mean that your job is irrelevant or that you are so unproductive that your absence is barely noticed. Either way, it sounds like your job security is in peril. Obviously, you want to be missed at least a bit.

To be successful professionally, you need to become indispensable to your team. I believe that there are two varieties of indispensability -- one good and one bad. Let me illustrate using a couple of examples and see if you agree.

Mr. Smith is indispensable to his team. When the OIU system goes down (as it often does), he is the one who knows just how to diagnose the problem and get things back up and running. Last month when he was on vacation, it took his coworker three days to fix the problem. Mr. Smith can usually sort out issues in a matter of hours. The operations team loves Mr. Smith, because he's always so quick to dive in and troubleshoot their problems as soon as they call.

Mr. Brown is indispensable to his team. He is always ready to lend his expertise to help a colleague solve a technical issue or discuss a design question. His software is always high quality, well documented, and easy to maintain. The junior developers love him because of his valuable mentoring. They prefer to maintain and enhance Mr. Brown's projects because the code is clear, well designed, and easy to work with.

You can probably see where I'm going with this. Mr. Smith and Mr. Brown are both considered indispensable for wildly different reasons. Mr. Smith uses something called "institutional knowledge". Over the years, he has become an expert in the internal systems of his company (institution). This knowledge, while valuable, can sometimes even be hoarded. With all of this valuable information held only in his own head, Mr. Smith essentially holds the information for ransom. He ensures his own job security while maintaining a charade of expertise and talent.

Mr. Brown, on the other hand, tries to offload institutional knowledge. Instead of hoarding it, he documents the details someplace where anyone can easily find it. Sure, it may take an intern new to the project some time to get up to speed, but that's only natural. Instead of banging his head against the wall or interrupting Mr. Brown with unending questions, our intern can simply read and reference the documentation as he stumbles his way through the project. This leaves Mr. Brown free to concentrate on his own work, while empowering others to do great things.

What I Hope They're Saying

I hope I was missed during my absence. I also hope that people weren't asking when I'll be back because I'm the only one who knows about a specific system. Instead, I hope that they were simply discussing their projects, confident in their understanding based on my documentation. I hope that my boss was missing me because I'm the best man for the task, not because everyone else is struggling to keep my work on track.

What do you want people to say when you're gone? Tell me in the comments.


Joshua Ganes

Thursday, May 08, 2014

Desperate Measures - A Time To Hack

A Case Study

Sometimes the features that seem simplest on the surface are the hardest to implement. This statement rang true for me in the past few weeks. In today's post I will walk you through the stages of my latest feature request and the challenges I encountered. This is far from my typical experience; rather, it's an extreme case that illustrates the kind of problem solving and outside-the-box thinking required of professional software developers.

For a bit of background, my work involves payment card processing in various forms and across a wide range of systems and third-party applications. One of my most successful projects to date has been a full payment solution integrated with a third-party point of sale system. This new feature was an enhancement to that system and required customization of receipt printing as requested by a client.

In a typical setup, the system will print three receipts for each credit card payment: a merchant copy, a customer copy, and an itemized receipt listing all of the items and totals ordered by the customer. Our client must be environmentally friendly (or, perhaps, simply cost-conscious) because they asked to eliminate as much receipt waste as possible. When configured for this new feature, the system would print nothing by default. Instead, it would cache the receipt details for later printing if requested. "Hey, could I get a copy of my receipt?"

At first glance, this feature sounded pretty simple. Creating a cache to store data for later retrieval is the bread and butter of a developer's work with many tried and true solutions. I would have to research the point of sale system scripting language and communications protocols to install a new button that would fetch the receipt details from our cache and print the receipt on demand. This, I figured, would be the challenging part of the project. While challenging in its own right, this paled in comparison to one challenge that I had overlooked.

The overlooked piece was the itemized receipt. These details are normally printed automatically by the third-party system. "No matter", I thought, "there is an API call to access these details". I had used it in a couple of different places previously. It wasn't until I went to implement the API call that I discovered the problem. The API did not work when a payment was in progress or after the check had been closed on the system. This meant that by the time my software got involved with the payment, it was already too late to retrieve the itemized receipt details.

Desperate times call for desperate measures. We contacted the vendor to discuss possible workarounds. After some back and forth discussion, we learned that the system has a print-to-file receipt option. I figured this would be a good option. My software could monitor the receipt file, waiting for new data to be written. I could then parse and cache the latest receipt details for retrieval. I spent some time restructuring the code to monitor a file for updates and scrape out the itemized receipt details.
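The monitoring boils down to polling the file for growth and reading only the appended bytes. A minimal sketch of that piece (the real code also parsed the scraped text into receipt details):

```cpp
#include <fstream>
#include <string>

// Return any bytes appended to `path` since `offset`, advancing `offset`.
// Called periodically, this turns a growing print-to-file log into a
// stream of newly written receipt data.
std::string read_appended(const std::string& path, std::streamoff& offset) {
    std::ifstream f(path, std::ios::binary);
    if (!f) return "";
    f.seekg(0, std::ios::end);
    const std::streamoff end = f.tellg();
    if (end <= offset) return "";  // nothing new yet
    f.seekg(offset);
    std::string chunk(static_cast<std::size_t>(end - offset), '\0');
    f.read(&chunk[0], static_cast<std::streamsize>(chunk.size()));
    offset = end;
    return chunk;
}
```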

Once I learned how to monitor a file for changes, all went well with my initial test runs. The receipt text was written to the file. My software read the new data and cached it. Hitting the receipt button triggered the system to print out all of the required receipt details. I figured it was just a matter of tidying the code and running it through a battery of tests. I sat back and admired my clever solution to the problem, proud of implementing a reasonably complex solution to my seemingly simple problem. It wasn't until we brought a second machine into the mix that signs of a flaw in my plan appeared.

The point of sale system runs with a central server and multiple client workstations. I had performed the majority of my development work on the server for simplicity. On the server, everything was working seamlessly. On the workstations, we saw random delays from the time of processing to the time the file was actually modified. Sometimes it would take a few seconds. On other tests, the delay climbed nearly to the 1-minute mark. Waiting a whole minute for a receipt simply wouldn't fly with a busy merchant.

Further discussion with our vendor contact led us to conclude that this was simply the nature of the system. The file-based printing was designed as a record-keeping system, not a live processing environment. It was time to go back to the drawing board. Something about the best laid plans...

While discussing the problem with a colleague, I joked about it saying, "maybe we should just write a printer driver." My colleague took my joke a bit more seriously than I had expected and started playing around with the printing configurations on our test lab. He found a printer option identified as TCP/IP printer and pointed it at our software's listen port. We ran a test transaction and saw that some unknown request data had been logged by our software.

Somewhat bewildered and whimsically bemused at what I was about to try, I visited everyone's favorite search engine and started looking around for TCP/IP printer message format specifications. With a bit of trial and error and some luck, I stumbled upon a specification that looked similar to the messages that my application was receiving. "This might just work...", I thought.

According to the specification, the first message I saw was a printer status message. I updated my software to respond with a "ready" response. When I received a second request following my response, I knew we were onto something. I worked through the commands one by one until I saw the first line of the receipt header in our logs - plain as day.
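The shape of the fake printer can be sketched as below. To be clear, the message names here are illustrative stand-ins; the real protocol's byte-level commands came from the specification we found and are not reproduced. The point is the pattern: answer every status query with "all clear" so the point of sale proceeds to send the receipt text, which we capture instead of printing.

```cpp
#include <string>

// A make-believe printer: reply "ready" to every status query and
// capture print data instead of putting ink on paper.
struct FakePrinter {
    std::string captured;  // receipt text scraped from print commands

    // Any status query (paper? door? ink? cutter?) gets a ready answer,
    // so the point of sale happily continues the conversation.
    std::string on_status_query(const std::string& /*query*/) const {
        return "READY";
    }

    void on_print_data(const std::string& text) { captured += text; }
};
```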

There's something about this particular hack that is both wonderful and horrifying at the same time. I get a thrill out of the fact that we are simulating a printer simply to retrieve the information we need. At the same time, I boggle at the fact that something this simple shouldn't have to be so hard. There is also, unfortunately, a nagging risk that my software will be incompatible with some requests sent to it in the field. We will do our best to reduce this risk, but doing something that's not officially sanctioned and ordained by a vendor is generally a bad idea. Whatever way you slice it, this solution works!

Perhaps my favorite part of this solution is how I imagine the message exchange to determine the printer status:

THIRD PARTY:  Do you have paper?
MY SOFTWARE:  I have a roll long enough to span the country.

THIRD PARTY:  Is the paper door closed?
MY SOFTWARE:  It's glued shut.

THIRD PARTY:  Do you have ink?
MY SOFTWARE:  A well of ink so deep you could swim in it.

THIRD PARTY:  Is your cutter enabled?
MY SOFTWARE:  The blade is sharp enough to cut diamonds.


Why do I have two simultaneous and opposing reactions to this hack? Why would some developers turn their nose up at my solution, while others would describe it by saying, "that's brilliant"?

The software industry has a long tradition of describing both clever and ugly solutions as "hacks". This is, I believe, because most programmers are very calculating people. We tend to have strong feelings about the "correct" solution to a problem. When something deviates drastically from that ideal, we describe it as a hack. At the same time, when there is no correct solution to a problem, the curious among us try to come up with a "clever" solution to work around the barriers set in front of us. We will often describe this as a hack as it deviates from the theoretical ideal solution in the same way.

Whether or not you like what I've done in the case study above, it's hard to argue with results. If an ugly hack is what stands between me and my goals, I'll choose the hack every time (barring the ethically questionable). When there is no correct way to do things and I can find a workaround, you can bet that I will take the workaround. On the other hand, if there is a correct way to do things that cannot be accomplished under some immediate time pressure, you can be sure that I will want to come back and set things right when I get the chance. Remember, the customer is always right. The client doesn't care about your academic ideals -- they want results.

Quality developers usually think inside the box. They look at standard approaches to their problems and use proven technologies and techniques to get things done in a way that is robust and maintainable for the future. The best developers, however, have the ability to think outside the box when necessary. The resourcefulness and ability to bend a software system to your will are what separate the great from the merely good.

What is the craziest hack you have seen or implemented? Tell me your story in the comments.


Joshua Ganes

Wednesday, March 12, 2014

My Love / Hate Relationship With Stack Overflow

From Day One

I was an avid reader of Jeff Atwood's Coding Horror blog from the time I left university and ventured into the world of professional software development in 2006. I had recently discovered the ever-insightful Joel On Software by Joel Spolsky when the two bloggers announced a joint venture to create a new resource for programmers to collaborate using a question-and-answer format. With a desperately needed service backed by the talent, reputation, and influence of two widely known developers, I had great confidence that this venture would be a success. I am proud to say that while I had nothing to do with the actual coding or design of Stack Overflow, at the very least I had my say on the name. I participated in this poll and counted my vote among the 1721 supporters of the eventual winner.

Falling In Love

Perhaps my favorite thing about Stack Overflow (and the family of Stack Exchange sites) is when I try to ask a question that has already been answered by the community. I can quickly find similar questions to my own with all of the answers ranked based on how helpful the community found them. I can't tell you how many times I've paid a visit to the Stack, only to find that my question was already answered clearly enough to proceed immediately with my work.

In the rare case where I have not found the information I was looking for, I have been surprised by just how eager the community was to leap to my rescue. Helpful answers began to flow within minutes of posing my question. Intelligent developers from around the world competed with each other to offer helpful answers to my question in return for nothing more than my gratitude and some (nearly) meaningless reputation points.

The Hatred Begins

There was once a time when open-ended and opinionated questions would be asked and discussed ad nauseam. These questions would divide the community into cliques, each supporting their own point of view and uniting in defiance of all who would disagree. I, personally, found many of these discussions fascinating. It was exciting to see the varying perspectives and how viciously they were defended by their champions.

While some (like myself) loved these types of discussion and debate questions, others wanted these questions to fade away. The most adamant ones wanted them dragged out into the street and shot. They didn't like having questions with multiple conflicting answers or no real answer at all. Eventually, word came down from on high that every question needed to have a real answer.

This requirement did not destroy the world (or the site), but it began to slowly erode part of the community and its nature. Developers, as one might expect, tend to be nerdy. Many of us are pedantic (guilty as charged) and love to enforce stringent rules to the letter of the law. Those who are most eager to do so often find their way to elevated positions of authority, also known as moderators. I am under the impression that some of today's Stack Overflow moderators are on a quest to smite questions (be they useful or no) before they can be answered. They close questions as off topic, not constructive, or under a wide variety of other resolutions if they even hint at straying from the true and acceptable path.

The biggest offenders I've seen are of the form: "what is the best tool for X?" These are usually perfectly sensible questions. I need to accomplish X. I've probably done some searching and found a couple of tools that claim to help with X, but I have no experience with any of them. I attempt to consult the community for its collective experience with these or other tools in the quest for X. I find that these questions often have a couple of brief answers and have been ruthlessly closed as not constructive. The very nature of this type of question makes it subjective, but there is still a lot of valuable information to be shared here. Instead, the community is stifled by an overzealous moderator too eager to close the question to see the tremendous value that its discussion would bring to the site.

The other casualties of this are those active members of the community who lost interest following the change of course. Instead of participating in exciting discussions with intellectual equals, they were challenged to answer technical questions with correct answers. I guarantee you that some of these members reduced their level of activity on the site as a result.

Sites vs. Tags

My next beef is with the Stack Exchange family of sites. As more and more groups begin to create Q&A communities on Stack Exchange, it becomes harder and harder to find the right site to pose my questions or look for answers. Depending on the nature of my search, there may be a fair number of relevant sites to look at.

Imagine that you run into a library issue while programming a GUI application primarily targeting Ubuntu Linux. Where should you go first? Ask Ubuntu? Unix & Linux? Programmers? Stack Overflow? User Experience? You might think that some of my suggestions are more probable than others. My question is: why should you have to think about it?

Rather than running many similar but separate sites and migrating questions between them, why not put it all in one place and tag each question as belonging to multiple relevant categories? Stack Exchange can still maintain communities centered around each category. The moderators can still callously and judiciously kick inadequate questions out of their domain. Why should I have to choose a site? Just let me ask my question and allow the community as a whole to help direct it to the place where it belongs.

Crawling Back For More

Even with the problems I've discussed above, Stack Overflow is still a fantastic resource. When I have a programming question, I consistently find the best answers there -- even if I make my way there by searching Google. The fact of the matter is that with a large, intelligent, and active community, Stack Overflow is the best resource I know for all flavors of programming minutiae. I simply could not perform my job as efficiently without it. I guess I have to accept its flaws and admit that I still love it after all.

What are the best and worst features of Stack Overflow for you? Tell me your thoughts in the comments.


Joshua Ganes