Tuesday, December 10, 2013

Pulling The Plug - Software End-of-Life

Recently, the company I work for underwent a serious hardware overhaul -- upgrading a large swath of 32-bit servers to 64-bit processors and updating to the latest operating system and application versions. All in all, the upgrade went very smoothly. We planned the transition and tested our software in our development environment before deploying to production, then ironed out a few issues that cropped up within the first few days. The problems ranged from minor incompatibilities to simple human error. Unfortunately, hiding in the shadows was a long-forgotten application that was still in use by a handful of clients.

This piece of software is the kind of application legends are made of. Written many years ago, it has been performing well enough to be forgotten about and ignored. The original developer and documentation are long since gone, and it required a great deal of asking around just to find out who used this software and why. Unfortunately, the software, which had been running smoothly for years, did not transition well onto the new servers.

One of our developers (not me) was tasked with getting this service up and running on the new architecture. After spending a few hours digging through the old source code he was able to get it compiling and passing a few basic test cases. We figured that we could move ahead with a production deployment and shift our focus to more pressing matters. We figured wrong.

The next day, the support calls came in once more. Our software worked well enough for a few hours, but something had gone wrong overnight. The application was fully locked up and unresponsive to any requests. A quick restart did the trick, but this was only a temporary band-aid fix and was not acceptable as a long-term solution. The same developer began trying to reproduce the behavior and fix the problem for good. This became the routine over the next few days.

Now, the part I haven't yet told you is that this system was long since replaced with a newer, shinier, and more fully-featured solution. The new solution remains active, with many customers and regular maintenance and enhancements. Only a handful of customers remained on the old system, and their reason for staying was simple: don't fix what ain't broke. The system had served their needs for all these years with no updates or known issues.

The way we saw things, we were left with three choices:
  • Install the old server hardware back into our production systems to support these customers
  • Assign developers to debug the old code and get it working on the new hardware
  • Offer these customers the choice between upgrading to the new system or losing this service (with a reasonable amount of time to decide and transition their systems)
The idea of reverting to the old hardware was dismissed pretty quickly. The reasons we decided to upgrade our hardware were related to security, consistency, and future maintenance. We did not want to have to support 31 flavors of hardware in our production environment.

The remaining choice seemed to come down to simple math. Weigh the revenues against the costs and you'll find the correct decision. Is the math really so simple? How do you come up with a formula for this? I don't know the answer myself, but I'll take a shot at offering a few observations.

Let's start with some obvious factors. Software developers don't come cheap, and having developers spend a few days fighting with a bug can quickly run into the hundreds and even thousands of dollars. Estimate the required effort in time, multiply that by the developer's pay rate, and you have the cost of fixing the bug. Losing a paying customer is contrary to the goals of running a successful business. Continuing to support this solution means that the company can continue to feed off of these revenue streams. Keep in mind that these customers have been using these services with minimal maintenance for many years. Multiply the regular fees by the expected service time to estimate revenue. As with all things, estimates may not accurately reflect reality.
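The back-of-the-envelope comparison above can be sketched in a few lines of arithmetic. All of the figures and names here are hypothetical placeholders, purely for illustration:

```python
# Back-of-the-envelope cost/benefit estimate for keeping a legacy
# system alive. All figures are hypothetical placeholders.

def maintenance_cost(estimated_hours, hourly_rate):
    """Cost of fixing a bug: developer time multiplied by pay rate."""
    return estimated_hours * hourly_rate

def expected_revenue(monthly_fee, customers, expected_months):
    """Revenue: regular fees multiplied by the expected service time."""
    return monthly_fee * customers * expected_months

# A developer fighting the bug for three eight-hour days at $60/hour...
cost = maintenance_cost(estimated_hours=3 * 8, hourly_rate=60)

# ...versus five remaining customers paying $100/month for two more years.
revenue = expected_revenue(monthly_fee=100, customers=5, expected_months=24)

print(cost, revenue, revenue > cost)  # 1440 12000 True
```

Of course, the whole point of the post is that the inputs to a formula like this are the hard part, not the multiplication.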

Simple revenues and resource costs don't really cover some of the more complicated factors. If cancelling the service means losing a customer, it may also mean losing future streams of revenue from that customer. When the customer needs to expand their services in the future, who will they turn to? Will they look to maintain a relationship with a company that they already know and trust? If so, terminating that relationship today may mean waving goodbye to new opportunities in the future. What about word of mouth? Damaging your company's reputation may have wide-ranging effects through one of the most powerful and underestimated forms of marketing. What about future bug fixes and maintenance? Who is to say that the next round of updates won't expose more bugs in what becomes a money pit of maintenance work for only a handful of customers? What about the confusion caused by the need to support two similar but disparate systems?

To be honest, I am no expert on this matter. The choice was not mine to make and we decided to pull the plug on this legacy application. From a development perspective, I was pleased with this choice. Out with the old and in with the new, I say! I'd much rather have everyone up to date and on the same platform instead of trying to support a creaky old piece of legacy software. From the perspective of a businessman, I'm still not sure.

Have you or your company pulled the plug on legacy software? What were the circumstances? What triggered your decision? What factors did you consider before making the decision? Let me know in the comments.

Cheers,

Joshua Ganes

Thursday, August 08, 2013

Backwards Compatibility III - Planning For The Future

Dreaming Big


I like to daydream about my software projects from time to time. In my dreams, my software would become a worldwide sensation. It would be run on millions of computers operated by millions of users using various platforms and each having unique needs. My software would interact with thousands of third-party applications. Of course, my software would be flawless. Nonetheless, I would be forced to pander to the inept masses whose software is not worthy to interface with my magnificent technical achievements.

Down to Earth


In reality, most software that is written today will not see such widespread adoption. There are many software developers earning a reasonable living writing in-house software applications for use by a handful of employees in their own office. Others may have a wider impact, but will not go beyond a few thousand users in their niche market. While backwards compatibility is important for all developers to understand, it is rarely the highest priority for these types of software projects.

One example of software targeting the kind of compatibility required on a global scale is the web server. Web servers must be able to interact with an incredibly wide variety of devices and applications. They should also run on as many varied systems as possible to avoid limiting themselves to a single market. A web server needs to provide your grandmother's garage-sale Pentium II running Internet Explorer on Windows 95 with pictures of her family. It needs to provide content to yuppies browsing social sites on their iPhones during Saturday brunch. It needs to allow 14-year old Linux fanboys to post to their favorite hacker forums.

Identify Yourself


Perhaps the most straightforward way to plan for future compatibility is to add an identification protocol to each interaction. This lets the applications on each side of the interface know who they're dealing with and compensate for known issues in their counterpart.

This is really not so dissimilar from interactions with people. You know not to mention your recent SUV purchase to your tree-hugging hippie friend named Rainbow. You probably avoid talking about your contempt for Justin Bieber to your teen-aged niece. In the same way, you can build up a list of known compatibility problems over time and compensate for them by adjusting your own software to avoid the problem or fix it in another way.

Identifying a unique name for each piece of software is not as simple as it may first seem. There are several key pieces of information required to uniquely identify each application: name, version, platform, and configuration.

The name distinguishes similar products with obviously different implementations and opportunities for discrepancies in behavior. For example, consider web browsers provided by different companies such as IE, Firefox, Chrome, etc.

The version is generally a sequential numbering system that can help distinguish older (buggy and limited) versions from newer (corrected and fully-featured) versions.

The platform describes the operating system and architecture on which a cross-platform product is running. This distinguishes the same product built for 32-bit Windows XP, 64-bit Windows 7, or 64-bit Ubuntu Linux.

Finally, the configuration may contain additional information about modifications, plug-ins, and settings that enable special features or trigger certain buggy behavior. This information can be very wide ranging, complex, and difficult to fully capture.
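The four identifying pieces above could be captured in a small record exchanged during a handshake. This is only a sketch of the idea; the field names, the sample product names, and the version-comparison rule are all assumptions for illustration, not any particular protocol:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PeerIdentity:
    """Identification record exchanged between cooperating applications.

    name          -- distinguishes similar products (e.g. rival browsers)
    version       -- sequential numbering, oldest to newest
    platform      -- OS and architecture of this particular build
    configuration -- plug-ins and settings that may change behavior
    """
    name: str
    version: tuple          # e.g. (2, 7) -- tuples compare element-wise
    platform: str
    configuration: frozenset = frozenset()

def needs_workaround(peer):
    """Consult a (made-up) list of known compatibility problems."""
    # Hypothetical entry: "LegacyClient" before version 3.0 mishandles
    # large responses, so we would fall back to a chunked reply.
    return peer.name == "LegacyClient" and peer.version < (3, 0)

old = PeerIdentity("LegacyClient", (2, 7), "win32-xp")
new = PeerIdentity("LegacyClient", (3, 1), "linux-x86_64")
print(needs_workaround(old), needs_workaround(new))  # True False
```

The workaround table is the part that grows over time, exactly like the mental list of topics to avoid with your hippie friend Rainbow.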

Once each application has verified the identity of its neighbor, it is able to accommodate known compatibility issues and limitations by changing its own behavior. Unfortunately, not all players are always working towards a common goal. These version identification techniques can be turned on their heads and used to limit or disable certain applications.

Expect New Features


With most software, there is a great incentive to add new bells and whistles and little incentive to trim the fat. New features are used to attract new customers and provide new capabilities. Old features may be missed if removed, and are generally simpler to retain than to completely remove.

A widely-used standard designed with this in mind is XML. This, of course, stands for Xtreme Markup Language (or eXtensible Markup Language if you're a real stickler).

XML is designed as a tree of nested tags with attributes and text. Well-designed XML libraries provide the ability to parse XML text into a data structure and to query for desired components within the tree. The beauty of this design is that adding new tags should not interfere with old logic. Older applications are completely free to ignore tags they do not understand and simply proceed as if nothing is different.

By using XML or similar extensible formats for data interchange, applications will be better prepared to accommodate any of the inevitable new features the future has in store.
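To make the idea concrete, here is a small sketch of that forward compatibility: an "old" reader that queries only the tags it knows about keeps working when a newer document adds tags it has never seen. The tag names and the notion of a "version 1" reader are made up for illustration:

```python
import xml.etree.ElementTree as ET

# A document from a newer peer: <priority> and <attachment> did not
# exist when our hypothetical "version 1" reader was written.
newer_document = """
<message>
  <sender>alice</sender>
  <body>Lunch at noon?</body>
  <priority>high</priority>
  <attachment name="menu.pdf"/>
</message>
"""

def read_message_v1(xml_text):
    """Old logic: query only the tags it knows; ignore everything else."""
    root = ET.fromstring(xml_text)
    return {
        "sender": root.findtext("sender"),
        "body": root.findtext("body"),
    }

print(read_message_v1(newer_document))
# {'sender': 'alice', 'body': 'Lunch at noon?'}
```

The old reader never sees the new tags, and nothing breaks. The same property holds for other extensible formats, such as JSON objects with unknown keys.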

Be Prepared


As the Boy Scout motto says, "be prepared." If you are designing a software product with the dreams of being compatible with a diverse range of other products and amenable to new features, you must plan for the future. A little planning up front will save you a great deal of grief later. Expect that incompatibilities will crop up and need to be addressed. Expect that new features beyond the original design will be required. Use an interface that is designed to help deal with these problems even before they arise.

Keep in mind that designing for the future can be challenging and time consuming. If you are writing an internal application with no plans for data exchange and targeting a single platform, you're probably best to stick with the "You Aren't Gonna Need It" (YAGNI) approach. Make sure you understand which type of application you are creating before you dive in too quickly and shoot yourself in the foot.

Cheers,

Joshua Ganes

Sunday, July 21, 2013

Bowling Is Not A Sport

Rules To Live By


Back in 2003-2004, I completed an internship with IHS Energy in Calgary, Alberta. Not to spark too much controversy, but as an Edmontonian at heart, I cannot advise anyone else to do the same. The company had a lot going for it, many of the people were great, but I couldn't get over the fact that I felt like I was a double agent living within the walls of the enemy base. For those who have not yet sworn allegiance to the Edmonton side of the battle of Alberta, it was a great experience.

While I was there, I was privileged enough to work with a bright and charismatic coworker named Demetrio (I hope I've spelled that correctly). Originally from the Bronx in New York, he was one of those people that everyone seems to like instantly. I admire people with that gift and wonder where that special quality comes from. I came to see him as somewhat of a mentor over my time in Calgary.

Among work-related talks and intense games of foosball, Demetrio would share nuggets of wisdom and humor. On one such occasion, he shared that he and a few of his college friends each came up with a list of three rules to live by. I can no longer remember his first two rules, but the third was so profound that it has stuck with me to this day: "bowling is not a sport".

Obviously, this rule was meant in jest. Yet, somehow, I couldn't get the idea out of my head. Meditating on this thought led me to ask the question, "How do I define a sport?" This question is tougher than it seems. Pause here for a moment and think about your answer before proceeding.

(I'll wait here ...patiently)
Stay tuned and I will share my thoughts on this important topic.

A Spectrum Of Games


There are some games that the vast majority of people would define as a sport. Whether or not you enjoy it, very few would claim that football (European or American, take your pick) is not a sport. Balanced competitions between opposing teams involving a great deal of strength, speed, agility, and strategy are almost universally accepted as sports.

At the other end of the spectrum are games that have no physical component, but may contain some amount of strategy. Most board games and card games fall within this category. Indeed, some of these games require great time and commitment to master. Chess is the definitive example of a complex challenge for great minds. Some people include these games in their definition of sport, but most will protest their lack of a physical component.

In the middle lie games and competitions whose validity as a sport is hotly contested. Is, as Demetrio would deny, bowling a sport? What about equestrian events? The jockey clearly has skill and athleticism, but the heavy lifting is done by the horse. What about car racing? Again, drivers have great skill, and a long race can be a great test of endurance and concentration. What about individual contests like golf, where the player faces the course, not any opponent? What about judged competitions such as figure skating, dance, or gymnastics?

Formal Definitions


The definition of the word 'sport' as found on dictionary.reference.com contains a list of activities that I would not personally include using my definition of sport (more on that later). It even includes Demetrio's maligned bowling in its list of sports.

The International Olympic Committee recognizes sports as activities that are based in physical athleticism or physical dexterity. This guiding rule seems to be widely accepted by many.

My Definition


To find my own definition of sport, I examined games and competitions that I thought were not deserving of the title. My first target was games without a physical component. If the game can be adequately simulated by a computer, that is a bad sign. If an able-bodied man in his twenties has no advantage over a seventy-year-old retiree with arthritis, I cannot consider the game to be a sport. This rules out most board games and card games. Sorry poker, I love you, but the sports networks are just plain wrong.

I generally like the definition from the International Olympic Committee, but I feel it is still too broad. Prepare yourself for my first controversial criterion. True sports must include an element of strategy. Part of the joy of being a sports pundit is speculating on how I would do things differently if I were calling the shots. Nobody ever told a short-distance sprinter to hold back and lull his opponents into a false sense of security. No, the only strategy is to run as hard and fast as possible.

Another requirement is that the structure of the game must seek to put opponents on an even footing. By this, I mean that a game is designed to be symmetrical with both competitors trying to achieve equivalent goals. While games that break this structure are rare, this requirement becomes necessary for some of the more esoteric competitions out there. This eliminates most events from American Gladiators.

My final rule, which may also be controversial, requires a form of direct opposition. While one competitor tries to accomplish a physical task, the opponent must have a way to stop or impede his progress. A great deal of the glory of sport derives from a struggle of man vs. man. This rule puts the final nail in the coffin of bowling's claims to sporthood in my mind.

Applying The Tests


My criteria for a sport are quite clear and easy to apply. Even so, there are some events which sit on the border line. The first one that comes to my mind is curling. Let's take a closer look:

1. Curling requires weight control, precise aim, and physical sweeping. CHECK
2. Curling has many strategic components from guards, to freezes, to raises. CHECK
3. Each team has the same number of players and stones and uses the same playing surface and target. CHECK
4. The only "direct" opposition consists of sweeping stones out the back half of the house. check?

Is curling a sport? I'm still not sure.

Please Don't Hurt Me


Before I find myself impaled by javelins, being thrown across the room by weightlifters, or having my face used as a surface for ice dancers, I would like to express my respect for athletes of all forms. You train long and hard to achieve excellence and distinguish yourself in your field. You perform fantastic feats of strength, skill, and elegance that are exciting and inspirational. Please take my writing with a grain of salt.

My definition of sport comes from the concepts that come to mind when I hear the word. I picture hockey, baseball, football, soccer, tennis, and wrestling. These other competitions are wonderful too, but they just seem different to me. For those who see things differently, more power to you.

What Do You Think?


I asked you to come up with your own definition of a sport first. Now that you've heard my theories, how well does your definition hold up? What do you disagree with? Is there something I've missed?

Demetrio, if you're reading this, I hope you've kept the faith. Thank you for all the good advice.

Cheers,

Joshua Ganes

Monday, July 08, 2013

Stop Whining And Get Over Yourself

A Disturbing Theme


There has been a recurring theme in programming blogs over the past decade or so that gets my ire up each time I encounter it. Some entries tiptoe around this theme without ever coming out and stating it. Others state it so boldly that I am left with no choice but to roll my eyes or to fume at them and leave a nasty YouTube-style comment. Many authors with incredibly wide readership and influence have repeated this theme using different words.

Can you guess what the theme is?

Maybe a few quotes will help:


Shall I go on?

It seems that everyone with an opinion and access to the internet is writing a post bemoaning the sad state of affairs in the world of software. They complain that every single system is utterly flawed at the deepest level and is at risk of crashing immediately if we dare even look at it the wrong way.

A Dose Of Reality


Now, don't get me wrong. There is software out there that is crap. I've even written some of it myself. I feel, however, that saying all software is crap, or even that most software is crap, is far from the truth.

In the process of creating this post I have already interacted (both directly and indirectly) with many different software systems. I used my operating system, its user interface, and various hardware drivers to launch and control a web browser. I used my browser to search for and visit many different websites. Each of these sites is running at least a web server, and most have additional scripting and database systems that work with it. All of that data passes through multiple systems in transit over the internet. Guess what? I have not encountered a single noticeable error up to this poin...NO CARRIER

...

To my readers old enough to get the above reference, I apologize for subjecting you to such a terrible joke. To those younger readers who are scratching their puzzled heads, I apologize for subjecting you to such a terrible joke that you did not even understand.

The fact of the matter is that most of us use a wide variety of software each and every day. Occasionally we encounter a problem that interrupts us or prevents us from accomplishing the task at hand. It may be the blue screen of death disrupting our career-defining presentation to a high-profile client, or our browser freezing halfway through a video of adorable cats diving head first into slightly-too-small boxes. The rest of the time, our software is quietly plodding along in the background and doing its job so well that we barely recognize it's doing anything at all.

It's like that old caretaker at your company that most people were aware of, but nobody could quite tell you what he did. He worked after hours when everyone else had gone home for the night. You might have run into him that time you returned to collect something you had left in your office. Maybe he vacuumed the hallway outside your door that time you stayed late to finish your essential project before the deadline. When he retired, everyone suddenly noticed how so many subtle niceties around the office had changed. When a system is working well, we may not even realize how much we take it for granted until it is gone.

Most software that gets noticed is like the annoying guy at the office who all the managers praise, but nobody wants to work with. He feels the need to add his input on everything and boasts about his "expertise". Trying to integrate his work with the rest of the team sets everyone back as they struggle to correct all of the problems he's caused. He manages to assign the blame to the soft-spoken fellow who sits by himself in the lunch room. When he manages to take credit for the eventual success of the project and parlay it into a promotion, everyone else on the team breathes a grudging sigh of relief that at least they don't need to deal with that guy anymore. This is the bad kind of getting noticed.

The fact of the matter is that for every annoying software system that stands out in the wrong way, there are dozens of others that you barely even notice. You don't notice them because they perform their designated task in silence while you're busy watching groan-inducing videos of skateboarders landing crotch first on the railing instead of stylishly grinding down it.

Professional Pride


I don't know about everyone else, but when I step back from a challenging project that required lots of hard work, time, and energy, I take a certain amount of pride in it. I enjoy seeing efficiency improve as people use my system to accomplish tasks that used to take an order of magnitude longer and require tedious, manual effort. I enjoy creating new software realities out of mere ideas and possibilities.

In any system large enough to be proud of, there will inevitably be some problems. When the bug reports roll in, I don't just throw in the towel and bemoan the fact that I've created yet another crappy system. No, I roll up my sleeves and get to work. I fire up my IDE and get started debugging my code and developing a fix. Depending on the operating environment and the nature of the bug, I can have a fix ready and deployed within hours - delivering a working solution to an eager customer who is (hopefully) pleased with such prompt resolution for their issue.

When I watch users struggle to control my software, I don't just sit back and whine about how I've created yet another clunky user interface. I go back to the drawing board and come up with new ideas on how to make the user interface easier and more intuitive. I perform usability tests with ordinary folks who are trying my software for the first time. I iterate over new designs until I find a solution that is user friendly and simple to understand.

What They Really Mean


Perhaps I'm taking these people too literally. If their point is to acknowledge that most software has bugs and it's nearly impossible to create "perfect" software, then they've got it right. Developing software is challenging. Just when you think you've achieved a new level of competence, a new bug rears its ugly head. The bug mocks you as you sweat and furrow your brow, struggling to figure out what has gone wrong. When you finally figure it out, panting and exasperated, you smack your forehead for overlooking such an obvious flaw.

The good news is that with each boneheaded bug you create and fix, there's actually a chance you might be able to avoid making the same mistake again in the future. If you keep your head down and work hard for many years, you may just learn to create software that isn't a flaming ball of garbage. That said, even the best of us have days when our brains are just not on their game and we write a fancy version of while( 1 ) fork(); Don't lose faith. Keep going and continue striving to become a better developer. In time, you will gain a new level of competence and, at very least, you'll know enough not to let your guard down the next time you're feeling like a project was too easy.

Cheers,

Joshua Ganes

Friday, June 07, 2013

Is It Easier To Write Code Than To Read It?

The Premise


A recent Programmers Stack Exchange post reminded me of a seemingly widespread belief among programmers - writing code is easier than reading code. In his post, "Things You Should Never Do, Part I", about the perils of rewriting software from scratch, Joel Spolsky writes, "It’s harder to read code than to write it". Most other posts I have found on this topic seem to reference Joel's article. They nod along in agreement as if to say that, once revealed, this premise is gospel truth and cannot be denied.

Horror Stories


Any programmer who's been around the block a few times can remember some godawful snippet of source code that warped their minds and nearly broke their will. Foreign (AKA not mine) source code may be difficult to understand for a variety of reasons. It may be oddly formatted, use poorly named variables and functions, have confusing organization, or seriously lack comments. Other than code full of "clever" tricks (why multiply by 8 when you can just use the bit shift operator?), I believe the most difficult code to read is simply at a level of complexity beyond the expectations of the reader.

Since everyone seems to recall a story about an undecipherable code base from their past, it becomes easy to latch onto Joel's premise. We know that if we aren't careful about our code choices, it can grow into an unholy mess of badly-named functions full of side effects that run over multiple screens. It's easy to get code into this state, and it's difficult to deal with maintenance once the damage has been inflicted. In this sense, Joel's premise is correct.

Defining Difficulty


If we ignore "bad" code and focus on "good" code, I believe that the picture changes. Writing good code is hard. Choosing meaningful and descriptive names for variables and functions is difficult. Structuring and documenting code so it's easy to understand and modify is a constant struggle. Self-documenting code contains all kinds of clues about the purpose of each component and demonstrates how each piece interacts with the other.

I think that part of the problem comes with the vague way we define difficulty. What makes one task hard and one task easy? There are several ways to measure difficulty. Some are more meaningful than others.

I think some of us choose to define difficulty based on our interest in performing a task. Clearly, watching TV is easier than washing the dishes. This measurement breaks down when a task is both challenging and interesting. I'd much rather climb a rugged mountain trail than take out the garbage, but scaling the mountain provides a much bigger challenge than dragging the trash to the corner. Programmers love writing code because of the intellectual challenge and the joy of creating something from nothing. Understanding the work of another coder does not generally hold the same appeal.

When it comes to business, a much better way to measure the difficulty of a task is in the cost in resources. In this case, we're talking about the time and effort of a programmer. It may not be fantastically interesting to decipher someone else's code, but given a well-written code base, understanding comes with perseverance. Rewriting the same piece of code (larger than a few hundred lines) to the same level of quality will require an order of magnitude more time.

Discipline


Here's the bad news: every field has unpleasant tasks. As a programmer, I prefer writing new code over maintaining old code. When someone proposes an unpleasant task, people tend to come up with all kinds of excuses to avoid it. This is as true for programmers as for anyone else. The excuses pile up until we talk our way out of doing the unpleasant task.

Unfortunately, as professional software developers, we're not being paid for pleasant. We're being paid for results. Great developers recognize this and exercise discipline. They buckle down and read the code carefully. They are able to understand it much more quickly than it could be rewritten. Once they understand the code, they make their changes and move on.

The good news is that if you follow this approach, you'll get things done quickly and be able to move on to new projects. Reading good code instead of rewriting it will result in a higher level of productivity. Your boss may just take notice and reward you for your efforts.

Cheers,

Joshua Ganes

Wednesday, April 10, 2013

Standard vs. Real-Time Unix Signals

Software Architecture


Imagine the following software architecture: a single parent process manages multiple child processes. The parent and child processes send Unix signals to each other to manage and trigger activities.

You've tested this basic functionality under reasonable load and all looks well. After initial deployment, everything is looking great. It's time to turn up the volume and bring more activity and clients on board.

After some time, a new bug report comes in saying that some requests are being lost in transit. Your logs show activity hitting the child process. You see the child signaling the parent to perform its next action. You see the parent handling a variety of signals from other processes, but the one from the relevant child process is lost to the mysterious void of /dev/null.

The Reliability of Standard Signals


It turns out that there are two classes of Unix signals: standard and real-time. Standard signals occupy the low numbers (1-31), while real-time signals occupy a higher range, typically 34-64 on Linux. Be sure to refer to them via SIGRTMIN and SIGRTMAX rather than hard-coded numbers, since the exact range varies between systems.

It also turns out that standard signals are not queued: a pending signal is guaranteed to be delivered eventually, but duplicate sends of the same signal are merged while it is pending. This means that if multiple child processes send the same standard signal in a short period of time, the parent process may only receive one signal.

This is where the magic of real-time signals comes in. Real-time signals are queued, so one delivery arrives for each send. Use sigqueue() to send them, and be sure to check its return value for errors to prevent creating new bugs.
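The difference can be demonstrated from a single process by blocking both kinds of signal, sending each one several times, and then draining the pending set. This sketch uses Python's signal module and assumes Linux, where real-time signals are queued even when sent with a plain kill() rather than sigqueue():

```python
import os
import signal

# Block both signals so deliveries accumulate as "pending" instead of
# invoking handlers immediately.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1, signal.SIGRTMIN})

pid = os.getpid()
for _ in range(3):
    os.kill(pid, signal.SIGUSR1)   # standard: coalesces while pending
for _ in range(3):
    os.kill(pid, signal.SIGRTMIN)  # real-time: each send is queued (Linux)

# Drain everything that is pending, without blocking (timeout of 0).
received = []
while True:
    info = signal.sigtimedwait({signal.SIGUSR1, signal.SIGRTMIN}, 0)
    if info is None:
        break
    received.append(info.si_signo)

print(received.count(signal.SIGUSR1))   # 1 -- three sends collapsed into one
print(received.count(signal.SIGRTMIN))  # 3 -- every send delivered
```

A C implementation would call sigqueue(pid, SIGRTMIN, value) directly, which also lets the sender attach a small payload to each signal, and would check the return code as suggested above.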

Cheers,

Joshua Ganes

Friday, March 15, 2013

When Advertising Crosses The Line

Exploration


After publishing my first few posts on this blog I became curious about all of the available options to control it. I ran through the list of tabs reading the options and thinking about which ones I might adjust or tweak. I got to the "Earnings" tab and suddenly perked up a bit. I know that successful bloggers with thousands of regular readers can earn a tidy profit by allowing advertising space on their blogs. Now, my readership is nowhere near those numbers. My earnings, if any, wouldn't add up to more than a few dollars a month. Still, in my ever-optimistic mind, I smiled at the thought of my blog achieving critical mass and earning me a steady stream of cold, hard cash.

Human Behavior


What is the number one extension or add-on for web browsers? Well, that partially depends on who you ask. Most top-10 lists will be sure to feature AdBlock somewhere, with many placing it in the topmost spot.

Do you fondly remember the early days of YouTube? Early on, you could view a video without the need to hover your mouse for five seconds before rapidly dismissing an advertisement. Admittedly, not all videos have this problem. Google (who owns YouTube) allows flexibility for channel owners to decide how invasive the ads should be.

Have you noticed that since PVRs have become commonplace, watching live TV seems, somehow, more painful than it used to be? Those commercial breaks we used to tolerate are now prolonged interruptions leading to impatience and frustration (unless you needed the bathroom break). Many people have become self-professed experts at fast-forwarding over the commercials and hitting play just in time for the show to return.

It seems, based on our behavior, that people don't like advertising much. We block it, we skip it, and we change our habits just to avoid it. Some of us have even begun changing our brains and started developing banner blindness. All of these facts start to make a person wonder how anyone can advocate advertising at all. Surely, those people must be trying to justify their greed for those shiny advertising dollars.

(...and don't call me Shirley)

The Other Side


Quick: What are the ingredients in a McDonald's Big Mac? Go!

(highlight to view the answer):
Two all-beef patties, special sauce, lettuce, cheese, pickles, onions on a sesame-seed bun.

Did you get it?

How about this one? Complete the following jingles:

Oh, I wish I were an Oscar Mayer wiener.
Plop, plop. Fizz, fizz. Oh, what a relief it is.

Each one of these marketing phrases was broadcast several years before I was born. How could I possibly be so familiar with them? It seems that some advertisements have a certain je ne sais quoi that allows them to endear themselves to us and become part of popular culture. They have become burned into the cultural psyche.

One of my personal favorites was Molson Canadian's "I Am Canadian" advertising campaign. This series of ads inspired a wave of national pride and fraternity for Canadians (this was before the Canadian Molson and US Coors companies merged). These ads connected with me personally and helped to fill me with pride in my country.

When I was a kid, say seven or eight, I used to like some of the commercials more than the television shows. I would play in our living room while my parents watched TV and I would perk up only to watch my favorite commercials. I loved to sing along with the catchy jingles. I loved the brightly-colored and animated commercials most of all.

I don't think you would find many people complaining that a restaurant puts a simple sign outside of its doors inviting customers inside. I certainly can't complain when they slap enticing pictures of their tasty dishes in the window. Some people love to window shop. They love just looking at the stylish displays in the store windows which beckon consumers to come inside and check out their other products and deals. Most people won't balk when an acquaintance offers them a business card if it suits the topic at hand. This sort of available enticement tends to be overlooked as a form of advertising.

It seems to me that it's not as simple as judging advertising as good or bad; right or wrong. Clearly, some ads bother people more than others. Some may even inspire them, or at least amuse and entertain them. Others manage to get their point across without ruffling too many feathers. How, then, can we know which are good and which are not? When does advertising cross the line between acceptable and questionable?

Crossing The Line


I think we can all agree that we don't want to be treated like a sucker. Nobody likes to play the fool. Advertising should not bend the truth, and most certainly should not lie to you outright. This sentiment is even built into our legal system. Perhaps most abhorrent of all are the ads that are misleading in their purpose. They entice you with an offer that sounds too good to be true. Those foolish enough to fall for the bait will be taken for a ride and dumped unceremoniously out on the curb, their wallets lighter for the experience. Don't forget web page ads that disguise themselves as content.

Have you ever met a salesman who won't take no for an answer? Did you enjoy the experience? When someone puts information out in plain sight for me to see, I can choose to engage with him or choose to ignore him (recommended). If that person then begins to hound me as if I just wasn't paying attention, it can really get my hackles up. Guess what, I saw you just fine the first time and chose to walk past you. How likely do you think you are to get anything good out of me now that I have been interrupted?

In much the same way, aggressive in-your-face advertising tends to receive very negative reactions. People begin to curse not just the products being sold, but the people who made them and their ads. Most people are content to ignore modest web advertising and continue browsing the site. When the advertising becomes more intrusive, the reaction becomes stronger. Who hasn't thrown at least a minor fit trying to find the browser tab with the obnoxious smiley face ad complete with sound effects?

My final guideline to avoid crossing the line is directed at service and content providers: don't double-dip. No, I'm not talking about the time George Costanza double-dipped his tortilla chip. I am talking about when providers try to accept money from both advertisers and consumers. This can be seen in various forms: a movie theater showing ads before the movie that patrons paid $12 apiece to see; magazines or web sites charging a large monthly subscription fee, but still cramming tons of ads between the content; DVD or Blu-ray discs that play unskippable ads when they start up. Why should consumers pay good money to watch your advertising? Who are you serving? Make up your minds already.

Getting It Right


It seems that the path to advertising with a clear conscience is fraught with pitfalls. With so many ways it can be done wrong, we will need some guidelines for getting it right. Where can we start?

If you're going to interrupt me, make it worth my time. As I mentioned earlier, some forms of advertising are more invasive than others. Some forms are a pure and unadulterated interruption. If you're going to interrupt me, please do whatever possible to appease me. Do something amusing, entertaining, inspiring, or compelling. Present your message if you must, but do not linger longer than the task requires. Your interruption will leave an emotional impact regardless - make sure it's positive and not one of frustration or anger.

Be forthcoming. Most advertising is trying to drum up business so that the advertiser can sell more widgets, or get more people to use their service. The bottom line is money. Most people understand that people need to make a living. They won't hold it against you if you make it clear that you're asking them to participate in a business transaction. It's not that complicated, really. I will give you X if you give me money. Don't try to hide the fact that you are asking to be paid.

My final guideline is to be accessible. We tend to accept advertisements that are intrusive but explain how the advertiser can meet our needs right now. While car ads at the movie theater may alienate moviegoers, ads suggesting that we all go to the lobby to get ourselves a treat are more acceptable. It's better to choose to advertise to consumers who are interested in your product or service now, than to those who have no need of it until the unforeseeable future.

I wish that I could say more on how to get it right. Frankly, it's not too difficult to find ads that avoid pissing people off. Inspiring them, on the other hand, is an incredible challenge. When everything comes together just right, a brief segment of airtime can transcend advertising and become its own cultural icon.

What Do You Think?


As I explained at the beginning of this post, monetizing my blog, given my current readership, would barely be worth my time. I'm pleased to find that the ads available to me from Google AdSense tend to be of the mild variety. They exist on the periphery of the blog and don't tend to interrupt the reader. They are even intended to be geared to the interests of the user and the contents of this blog.

Imagine with me, if you will, that you have a successful blog that could potentially rake in an additional $1000 every month. Would you enable ads? Would it be selling out? What would you be concerned about?

What are some things that advertisers do that cross the line for you? Let me know what you think in the comments.

Cheers,

Joshua Ganes

Tuesday, March 12, 2013

Backwards Compatibility II - Fixing Bugs

A Bug Report


Picture the following scenario:

You are in charge of maintenance for a system that has been deployed in production at many different locations for over a year. A new bug report comes in saying that one location is having difficulties with a specific request. This particular site has been running well for several months up to this point, but one new request has started causing issues. After a little investigation, you realize that the problem is due to a longstanding bug in your software that no one has encountered before. Your server application is sending an ampersand ('&') character that is getting misinterpreted by the client application. The client is simply incapable of handling this message and crashes every time.

What do you do?

Well, you can start by fixing the client application so that this is not an issue going forward for any new installations and for anybody who is capable of updating their application software. Sometimes, this is not good enough.

(By the way, why does software always go months without issue before receiving three bug reports in the same week?)

In certain circumstances the client application is outside of your control and may be impossible to update at all the sites for various reasons: the corporate policymakers have frozen the solution and will not sign off on any updates; the software is installed at a remote location and nobody has the necessary permissions or skills to perform an update; some hardware governing body requires a drawn-out review process for you to release your client application; or many other absurd reasons.

Now what do you do?

Fixing Your Own Bugs


The key to designing for backwards compatibility is to embrace the fact that you cannot change the past. Since you cannot change the software that already exists in the field, you need to update the software that you do control to compensate.

Before releasing your updated client application, add a new version indicator to the message format (if it doesn't exist already). Modify the server application to distinguish between the old client application (version 1) and the new client application (version 2). When the server is ready to send a version 1 response, it will replace any ampersands with a suitably-safe substitute. When it encounters a version 2 client, no substitution is required.

The following example C++ code snippet demonstrates this logic:

// Generate a new response object for the given request
Response* ServerApp::handleRequest( const Request& request ) {

    Response* response = new Response();

    // ...

    string content = this->getResponseContent( request );

    // Remove ampersands for clients below version 2
    // See bug 19243
    if( request.getVersion() < 2 ) {
        content = this->replaceAmpersands( content );
    }

    response->setContent( content );

    // ...

    return response;
}


This update allows the existing client applications to continue operating in a limited way (sans-ampersand) while allowing new clients to use the full features of the application.

Fixing Other People's Bugs


Not to spark a religious operating system debate, but I feel that I have to mention the fantastic efforts of Microsoft to maintain backwards compatibility when developing Windows 95. As described by Microsoft employee and blogger Raymond Chen, the Windows 95 development team went to great lengths to make sure old software still ran seamlessly on their new operating system. They knew that most businesses and even individual users have one or more deal-breaker applications that must work before they will agree to upgrade their software. If those essential applications failed, Windows 95 would not fly.

The engineers at Microsoft came up with a clever way to build backwards compatibility directly into Windows. They created Application Compatibility Shims - named, as I understand, after door shims used in construction. Just like a door shim allows for adjustments between the framing wall and the door frame, these software shims allow for adjustments between Windows and incompatible software applications. Microsoft wrote many custom shims to compensate for subtle and not-so-subtle bugs found when they tested old software running on their new operating system.

When Windows 95 finally launched, there were surprisingly few programs with serious compatibility issues. When issues were discovered, there was rarely a need to dig into the scary underbelly of the operating system. Instead, Microsoft could just configure an existing shim or whip up a new one to fix the bug. This design was so successful that very few people ever gave a second thought to upgrading Windows. This gave Microsoft market dominance of both the corporate and personal desktop - a position that other platforms are still struggling to pry away.

Something to Think About


As I explained earlier, the key to backwards compatibility is embracing the fact that you cannot always change what already exists. By cleverly designing new software to compensate for old mistakes, we can mitigate their issues and sometimes even pretend that the old mistakes never existed. All of this comes at a cost, but that's a topic that I will have to address in a future post.

Cheers,

Joshua Ganes


This post was the second part of a series on the topic of backwards compatibility. My next article on this topic: Backwards Compatibility III - Planning For The Future.

Sunday, March 10, 2013

The Insta-Set Clock - Named For Its Worst Feature

My wife and I had a minor problem. It's one of those problems we acknowledged, but let linger for a long time. Whenever we were in the living room and wanted to know the time, we couldn't find a convenient clock. Sure, there is the clock on our PVR and a clock on my computer and clocks built right into our phones. Each of these required a small inconvenience as the time was not immediately visible and available. Several times we discussed how we needed to buy a new wall clock.

One evening a few months ago, we were in London Drugs and happened to pass by a shelf full of wall clocks. Since we were not in a hurry, we paused to browse through them. The most important criteria for choosing a clock, we decided, were aesthetics and volume; that is, we wanted a clock that looked good and ran very quietly. Since all the clocks were turned off, we could only choose based on appearance. We decided that we would return the clock if it was too loud.

Insta-Set Clock
Based on its clean and easy-to-read style, we chose the Accu-Time Insta-Set Clock. Reading the packaging, we learned that it has another interesting feature we had not considered. The packaging explained that when you turn the clock on, it will automatically set itself to the correct time. It is also smart enough to automatically adjust itself for daylight savings time. All the more reason to give it a try, I thought.

By the time I got around to trying out our new purchase it was about 9:45pm. Being a stereotypical male, I dove right in without reading the instructions. I set the clock to a few minutes before the current time and was excited to see it set itself to the correct time automatically. The second hand started flying around the face and the minute hand progressed towards the correct minute - so far, so good. The only problem was that the flying hands didn't stop when they reached the correct time. Instead, they kept flying forward giving me the sensation that I was in some kind of time-travelling B-movie scene.

After trying and failing again, I finally conceded that I would have to read the instruction manual. It informed me (among other details) that I would have to move the clock hands back to the midnight position and let the clock set itself from there. The only problem with this was that during the "Insta-Set" process, the second hand goes fully around the face about once every 5 seconds. This means that setting the clock at about 10:00pm requires it to go around 10 x 60 = 600 times. At roughly 5 seconds per revolution, that works out to about 3,000 seconds, or 50 minutes. All told, setting the clock correctly for the first time took nearly an hour. This is a new definition of 'instant' that I was not previously aware of.

After our initial trouble getting the clock up and running, I am pleased to say that we are now very happy with it. It runs nearly silently and has kept accurate time for a few months. As of this morning (March 10, 2013) I am also pleased to report that it correctly adjusted itself for daylight savings time. I would recommend buying this great little clock to anyone. Just don't get it if you need a clock that actually sets itself instantly.

Cheers,

Joshua Ganes

Thursday, March 07, 2013

Backwards Compatibility I - Presented In Color

Journey to the Past


Let's take a journey back in time - well before smart phones and buzzwords like "social media" (ugh). Before Nintendo (pick any version), before Atari 2600, and even before Pong. Turn back your mental clock to a time when people would listen to shows on the radio. It was in this period that owning a newfangled television set became something of a status symbol.

Television was a new phenomenon and a fascinating novelty. There were still many great enhancements left to discover. Nobody had yet figured out recorded video, instant replay, picture-in-picture, infomercials, reality TV, daytime talk shows, or re-runs. Perhaps most surprising of all was that viewers might like to see their shows in color instead of just black and white. Even while lacking all of these things that we take for granted today, television was clearly a hit. Consumers far and wide opened their wallets to buy one of these shiny entertainment machines.


How Televisions Work


Those readers who are familiar with how CRT televisions and monitors work will probably want to skip my simplified explanation in this section. Go ahead and we'll catch up with you in the next section. For those who are learning about this for the first time, pull up a comfortable chair, grab an icy-cold drink, and read on. Readers who are truly in the know will find my description to be a slight oversimplification.

The earliest televisions used a technology called cathode ray tubes (CRT). This was the primary television technology for many years and you will still find them haunting the corners of basements, rec rooms, and man caves today. CRTs function very differently from the modern LCD, LED, plasma or other more exotic televisions available today.

In CRT screens, an electron gun fires a beam of electrons at the screen. A thin coating of phosphors is layered on the inside of the screen. When these phosphors are hit by the electrons, they glow for a brief time. A powerful electromagnet directs the stream of electrons in a zig-zag pattern across the screen and from top to bottom. The electron gun modulates the intensity of the beam from high to low. When the intensity is high, the phosphors on the screen will glow brightly (white). By reducing the intensity, the display can glow in shades of gray or even fade to black. A further, more detailed description is available here.

All of the above happens many times per second - 60 times per second in North America and 50 times per second in most of Europe. In order to determine how intense the electron beam should be, the television is tuned to a video signal. In original black-and-white televisions, this only indicated two things: 1. When to begin each pass. 2. The intensity of the electron beam at each moment. If your television is tuned in to a strong enough signal, this will result in a clear image on your screen.


The Trouble with Color



To those who skipped the previous section, welcome back. To those who chose to read on, I hope you learned something new.

Now, color was obviously a desirable feature. Inventors quickly began working toward this goal. Unfortunately, color added a whole new level of complication to the system. Achieving color television would require modifications to both the hardware used to display those colors and to the video signal that transmits them.

The very first commercial color broadcast used a newly-invented signal format. Televisions designed to receive these signals worked perfectly well, but all of the old black-and-white sets already installed in viewers' living rooms were completely unable to view these broadcasts. Needless to say, this caused a serious chicken-and-egg problem. Consumers had little reason to buy a color television while most programming remained black-and-white, and broadcasters had to choose whether to transmit each show in the sparsely-adopted color format or in the well-established black-and-white format that everyone could receive.

Backwards Compatibility


What if we could find a way that old black-and-white televisions could watch new color broadcasts (minus the color) while new color televisions could still receive old black-and-white transmissions? Broadcasters and consumers could stop worrying about the tired old question of black-and-white vs. color and move on to inventing and voting on singing competition shows respectively. As nice as it would be, surely this idea was just a naive pipe dream.

Amazingly enough, Georges Valensi found a clever way to make this dream a reality. He designed a new video signal that added additional information to the old format. The first part of the video signal indicated the overall light intensity (just like before). The second, additional, part of the video signal indicated the relative intensity of the red, green, and blue color components. This meant that the old black-and-white televisions could safely ignore the second part of the signal and still display a reasonable image. New color televisions were able to perform additional processing on the signal to fill in the corresponding colors.

This example of the color television signal format is just one of many thousands of examples of backwards compatibility. When implemented correctly, backwards compatibility allows old, existing systems to continue functioning while adding new, desirable features and capabilities to newly-designed systems. Sometimes protocols can be designed with future flexibility in mind. Other times, clever thinking is required to find a compatible new solution. In any case, backwards compatibility is a fabulous way to create something new without breaking a system that is already in use and working well.

Cheers,

Joshua Ganes

This post was an introduction to the topic of backwards compatibility. My next article on this topic: Backwards Compatibility II - Fixing Bugs.

Thursday, February 28, 2013

Signal Handlers Plus Locking Equals Evil Squared

Threads Are Evil


I'm not going to go into too much detail on the reasons why threads are evil. This topic has already been covered by more experienced authors than myself here, here, here, and perhaps most importantly here. I think I had better clarify that or I'm going to get nasty letters with inappropriate references to my mother. Threads are evil, but sometimes a necessary evil.

The basic problem with threading is one of race conditions on shared resources and non-atomic operations. If thread A begins an operation on resource R, and a context switch occurs while the operation is in progress, thread B can come in and step on its toes and corrupt resource R resulting in undefined and (usually) undesirable behavior.

There are techniques for preventing these kinds of problems, which generally involve some kind of carefully-synchronized locking mechanism. Used incorrectly, these locks can lead to deadlocks, with each thread frozen waiting for the other to finish its work. Employed appropriately, these techniques enable concurrent processing without compromising the stability of the program.

A Story From The Real World


Join me in my sorrow as I relive the pain of recent events or, more likely, snicker and laugh at me as I run around in a panic trying to put out fires.

At my work, we run a live service application that has been deployed in production for many months. I won't go into too many details for privacy reasons, but suffice it to say that hundreds of devices connect to this system daily and expect it to work around the clock. It has had, as with all non-trivial software, its share of bugs. Most of them have been minor and reasonably straightforward to reproduce and fix. This one added several new twists.

This service, which had been running for some time now, suddenly froze, deadlocked and unresponsive. Anything short of the all-powerful kill -9 was ineffective in terminating the process. Fortunately, we have monitoring software in place that detects periods of inactivity and alerts us via email.

Browsing the server logs showed nothing out of the ordinary. The service was under reasonable load, but certainly not pushing the limits of the hardware. A few basic attempts to reproduce the issue came up empty. My next course of action was to create a new, massively active stress test. Even running this stress test failed to reproduce the issue. In the meantime, we saw the problem in production a second time. It was clearly not just somebody's imagination.

The breakthrough moment came when I enhanced the stress test application to randomly break connections and ignore messages. After running the stress test for a few hours, I finally found a frozen process in the test environment. With a little further analysis, I quickly realized that the problem was in the service's signal-handling code.

The following C++ code snippet is a rough simplification of the code being used to process signals:

#include <csignal>
#include <queue>

static std::queue< int > signalQueue;

static void enqueueSignal( int sigNum ) {

    signalQueue.push( sigNum );
    signal( sigNum, enqueueSignal ); // re-register; some platforms reset the handler

}

int main( int argc, char** argv ) {

    signal( SIGTERM, enqueueSignal );
    signal( SIGCHLD, enqueueSignal );
    // ...

    while( true ) {

        if( !signalQueue.empty() ) {

            int sigNum = signalQueue.front();
            processSignal( sigNum ); // Defined elsewhere
            signalQueue.pop();
        }

        // ...

    }

}


Deep within the dark and disturbing mysteries of the standard template library lies a secret terror. That terror is designed to deal with the issues of threaded race conditions as I described above. Unfortunately, it can also occasionally ensnare a program that uses STL containers inside of a signal handler.

Signal handlers are special: they do not run on a separate thread, yet they can interrupt the main thread at any point in its processing, even in the middle of an operation that holds an internal lock. If the handler then tries to acquire that same lock, it waits forever, because the code holding the lock sits frozen on the stack beneath the handler and can never resume to release it. This is clearly a case of evil squared.

A Way Out


The solution to this problem: use the volatile sig_atomic_t variable type. This type exists specifically for use inside signal handlers. Reads and writes of a sig_atomic_t value are guaranteed to be atomic, so a handler can safely update such a global variable without any locks.

Make sig_atomic_t your new best friend when writing C++ signal handler functions. This unfortunately means that you cannot implement particularly complex logic within a signal handler function. My recommendation is to implement a new signal queue as an array of sig_atomic_t values with a cycling index and size counter. These values can then be read by the main loop once the signal handler returns. From there you can do whatever complex processing may be necessary for your system.

Good luck with your ventures into the dangerous world of concurrency and signal processing.

Cheers,

Joshua Ganes

Saturday, February 23, 2013

Issues With Java Updater or Where The Heck Did This Ask.com Toolbar Come From And How Do I Get Rid Of It?

Act I - Morning

I woke up, kissed my wife, and threw on my housecoat on my way out of the bedroom. I grabbed a box of Cheerios, a bowl, a spoon, and a jug of milk and balanced them carefully as I walked toward my computer chair. I jiggled the mouse a bit and waited for the computer to wake up. I immediately opened a handful of browser tabs to find some early-morning entertainment and began sifting through the titles looking for amusement.

It was at about this point when I noticed a little flashing logo on my task bar. Oracle had come out with yet another Java update and needed my permission to go ahead and install it. Asking for permission is good, right? If Java just replaced itself inside of a critical system, who knows what could happen? We could wake up and find that all of Earth's computers had ground to a halt driving the human race into anarchy and chaos with cruel, makeshift bludgeoning weapons and the weak and helpless crying in the streets. More likely, the big boss might miss an important message about an important meeting with an important fellow and miss out on an opportunity to access all of his important money.

I understand why the Java updater errs on the side of caution here. My real problem stems from trying to find a way to change its default behavior. I have never (knowingly, anyhow) had an application that I use at home break because of a Java update. I've tried (and perhaps some reader can enlighten me) to override this and find some configuration that allows me to install these updates automatically. So far, my search has been fruitless.

Rather than performing updates automatically, I do them mindlessly. I repeat the mantra "Shut up, Java. Yes, Java" over and over in my head with a disgruntled disposition while hammering the Next button as fast as I can. After hitting the rightmost button on the update wizard a few times, the flashing icon goes away and Java is appeased once again. This technique served me well for several months and I used it once again on this occasion.

I continued my breakfast and wake-up routine. I finished my bowl of cereal and read through a few amusing articles online until my start-of-shower deadline beckoned me on. I hurried to get myself ready for the day and rushed out the door just in time to make my way to work.

Act II - Evening

My family and I had just finished dinner. I delivered the dirty dishes to the sink and waved at them, expecting that they would magically clean themselves. I have yet to master this technique. My wife was playing on the floor with our little girl. I can't quite recall what I was looking for, but I decided that I wanted to search for something online.

I opened my browser and typed my query into the Google Chrome omnibox. I paused for a moment because something didn't look quite right. Why did Google just give me an Ask.com search result as my top link? Figuring I must have typed something strange, I rephrased my query and typed again. "Who changed my default search engine?", I asked to the room in general. My wife gave me a shrug of uncertainty.

I looked more closely and realized that there was some kind of new toolbar that I hadn't seen before. I couldn't remember installing anything of dubious origin recently. I was both miffed and perplexed. I thought, like with other malware, that I would have to go on a long hunt to find the culprit and eradicate it. Thankfully, it turns out that removing it wasn't too arduous a process. NB: This admission does not excuse anyone of wrongdoing.

I found the Ask.com toolbar under the Windows Control Panel Uninstall option and purged it with great vengeance. I reset my default search engine back to Google and everything was right with the world. Everything except the question that kept nagging at me - where did it come from? I finished my search and settled in for a cozy evening with my family.

Act III - Revelation

Our story continues about a week or so later. Another day, another web search... The Ask.com search and toolbar had come back. "Clever girl..."

This was surely the work of a genius criminal mind. Lull them into a false sense of security and then strike when they least expect. Play dead when they try to fight back.

This time I was determined to rid myself of the problem once and for all. A few quick searches led me to an unexpected and horrific conclusion. This was not from some random malware, but from Java. Good ol' Java - my buddy, my pal - running slowly and pretending to be as cross-platform as ever. My first university course was in Java. I was even the TA for a whole lab section on Java. This time, it was personal.

While it is technically not required, Java runs on so many personal computers that it might as well be considered an essential component. The fact that this story played out this way for me means that it probably played out similarly for thousands of others, if not more.

My problem isn't with Java bundling the Ask.com toolbar, though that's questionable enough. My problem is with how hard they seem to want to shove it down our throats.

For one, the optional toolbar installation is checked by default. This means that people who aren't paying attention (me) or people who don't know better will be force-fed a toolbar that nobody actually wants. This is particularly grievous for novice users who can't easily figure out how to undo the damage.

My second objection is that the user's preference is not even saved for the next go-around. This means that my old method of mindlessly clicking through the installation wizard is no longer safe. I now have to pay attention and deselect this option once every few weeks (probably more like two or three times, as I use a few different computers). This exacerbates the problem I described earlier of not being able to automate the update process. I leave a back-of-the-envelope calculation of the amount of time wasted worldwide as an exercise for the reader.
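If you're stuck running the installer anyway, the Windows JRE installer does accept command-line switches that make a scripted, offer-free install possible. The silent /s switch is documented; the SPONSORS=0 switch for suppressing the bundled offers is reported for later installer versions, so the exact filename and switch support vary by release. Consider this a hedged sketch, not a guarantee:

```
rem Hedged example: silent JRE install with sponsor offers suppressed.
rem The /s (silent) switch is documented for the offline Windows JRE
rem installer; SPONSORS=0 is reported for later installer versions.
rem Substitute the actual installer filename for your release.
jre-<version>-windows-i586.exe /s SPONSORS=0
```

Run from an elevated command prompt; a scheduled task wrapping this command is one way to approximate the automatic updates I was wishing for earlier.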

Oracle is abusing Java's absurdly advantageous position as an "essential component" to cash in on some marketing money from Ask.com. This soulless cash grab inconveniences and disrespects thousands of users each and every time they are forced to disable this option.

Unfortunately, I don't believe that Oracle or Ask.com will be punished adequately for this abuse of power. I sincerely hope that I am wrong.

Cheers,

Joshua Ganes