Nov 16, 2010

The modern programmer

Abstractions. Specialization. Outsourcing.

The further I get into my career and the more skill and knowledge I collect, the more I begin to see patterns in life as a whole. It may sound odd to say that being a .NET developer and learning C# has taught me a lot about how societies and civilizations work, but it has.

As computers get more and more complicated, we programmers have to be increasingly specialized, which has the side effect of fragmenting and compartmentalizing us. Once upon a time, there was just the “computer guy.” Now you need a “network guy”, a “systems guy”, a “software guy”, a “database guy”, and so on. Even just focusing on my area of software, you need dudes who know kernel kung-fu, shell scripters, user experience gurus, graphic designers, architecture analysts, web guys, Windows guys, Apple guys, various Phone/Mobile guys, and so on. What happened to just being a “programmer”?

As we programmers find ourselves needing to specialize more and more, the fringes of our knowledge become ever clearer, and they can be quite scary. We realize that should anything go wrong, we really only have a narrow slice of territory where we’re confident. Beyond those near borders lie hacks, kludges, and hours of Googling. Or worse: phone calls, support tickets, and temporary (or permanent) workarounds.

That fear of the ever-closing darkness of unknowns is just an illusion cast by our ever-growing light of skill. The better we get at certain things, the more that new knowledge throws ever darker shadows over what we now know we don’t know, or as Rummy famously put it, “known unknowns”. It is simply that we rarely ever think about or acknowledge the unknown unknowns. Hey, we’re busy people, right? Not that there’s anything wrong with that.

To fresh meat in the field, such as myself and the interns I often work with, this encroachment of newly known unknowns can be intimidating, frustrating, and even depressing. The reality comes crashing down on recent CS graduates that despite four years of study there is way, way more that they haven’t got a clue about, and it’s changing every day. That last part is particularly debilitating, as it suggests that not only are we way behind on things but that we’ll probably never catch up.

How do you deal with that? Not just as a programmer, but as anyone? Because this isn’t something unique to programming. Every field and walk of life introduces it sooner or later.

I once listened to Joel Spolsky (most likely in one of the many great StackOverflow podcasts) talk about how programmers today are “API developers” whereas just a few years back they were more like “language developers”. His basic point, so far as I remember it, was that in the past you sat down with the book (probably from IBM) and learned a programming language. Often the languages were specific to a particular model or series of machines. They were simple enough, though, that you could read the book through and start banging out code, using it onward only as a reference. Languages, like the machines they commanded, were simple, concise, and didn’t require an unrealistic amount of memorization.

Today, you learn APIs, whether it is .NET’s hundreds of namespaces, classes, and methods or Java’s or Apple’s or whatever. This is in addition to learning a requisite language as well (C#, Java, Objective-C, etc.), and already you can see why MSDN is every .NET developer’s #2 hit (#3 is now StackOverflow; #1 is of course Google, but it is usually just leading you to either #2 or #3 or some guy’s blog like this one). These APIs are impossible, save for the idiot savants and autistic among us, to even begin to commit entirely to memory, and their inner workings are often veiled. It wasn’t until just a few years ago that Microsoft let us peek at framework implementation code while debugging. And this is still just within the .NET sphere. The CLR exists within an incredibly sophisticated OS, which itself is leveraging thousands of subsystems, firmwares, and network resources. Each one has years of history, tons of code, and gobs of potential documentation.

So when you’re writing a WPF app and you want to know why double-clicks aren’t firing on your ListBoxItems… well, where to begin? You might get lucky and find that you simply forgot you were setting e.Handled = true; in some earlier event. Or maybe it’s much deeper. Is it the event handler itself? Is it the ListBoxItem control? Is there a problem in the routed event chain (bubbling/tunneling)? What about the XAML parser? The message pump on the window? The mouse driver? The mouse buttons themselves? The USB hub? Where down the line could it have gone wrong? (Let’s never forget to rule out PEBKAC!)
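To make the lucky case concrete, here is a hedged sketch of that forgotten line. The handler names and wiring are hypothetical (not from any real app), and the exact suppression behavior depends on how the control detects clicks, but the shape of the bug looks like this:

```csharp
// Hypothetical WPF code-behind. A tunneling Preview handler runs before
// the bubbling events, so marking the event handled here can keep the
// ListBoxItem from ever seeing the clicks that make up a double-click.
private void Window_PreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    e.Handled = true; // the forgotten line: swallows the event for everything below
}

private void ListBoxItem_MouseDoubleClick(object sender, MouseButtonEventArgs e)
{
    // Never fires while the preview handler above is swallowing clicks.
    MessageBox.Show("Double-clicked!");
}
```

If the preview handler turns out to be the culprit, deleting one line fixes it; if not, the hunt continues down the stack.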

What you realize is that as things get more complicated and we continue to pile on abstractions and frameworks, we keep increasing the number of places where things can go wrong. When automobiles were just a few axles, wheels, and pistons, there were only a few places to look when the thing didn’t start. With modern cars there are so many belts, circuits, and doodads that you need an official diagnostics machine and a standard output port just to debug. Mechanics are basically debugging cars these days, and I don’t mean picking the bugs off your radiator grille.

System time used to be just a chip on the motherboard, often represented with a simple numeric value. Now we have many complicated systems working in concert. To use the Microsoft example again, since it is most familiar to me, we have a southbridge that keeps track of the real-time clock; an OS that handles daylight saving, time zones, leap years, locales, calendar types, and synchronizing with remote time servers; and .NET, which provides classes and methods for converting, manipulating, calculating, and representing all kinds of time-related stuff. A malfunction in any one of them might explain why your app’s error log timestamps are off.
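For just the top layer of that stack, here is a minimal console sketch (using only the stock BCL types, nothing assumed beyond them) of the conversions .NET does on my behalf:

```csharp
using System;

class TimestampDemo
{
    static void Main()
    {
        // The OS hands .NET the current UTC instant; the RTC chip and
        // remote time sync all live somewhere below this one call.
        DateTime utcNow = DateTime.UtcNow;

        // The OS time zone database supplies DST rules and offsets, so
        // the conversion to local time is also not my code.
        DateTime localNow = TimeZoneInfo.ConvertTimeFromUtc(utcNow, TimeZoneInfo.Local);

        // If these ever disagree with the wall clock, the bug could be in
        // any layer underneath: the southbridge, the OS tz data, or the framework.
        Console.WriteLine("{0:o} UTC -> {1:o} local", utcNow, localNow);
    }
}
```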

“Too many moving parts!” you say?

But while the knee-jerk reaction is to cry foul at the complexity and the ever-growing potential points of failure, I say embrace it! As societies this is precisely what we have done, and would you really argue that we are worse for it?

We live in an age of ever-growing complexity, specialization, and outsourcing (and no, I don’t mean of the Indian variety specifically). We’ve gone from all being either hunters or gatherers, to some being leaders, some medicine men, and most hunters, to thousands of career fields and specialties. Everything we take for granted today is made possible by outsourcing 99% of our lives so that we can focus on something else very closely. Could you put together your own car? Refine the gas to run it? The oil to lubricate it? Cultivate and harvest the rubber to make the tires? And continually improve all these processes while coming up with new ones? Of course not. In fact, no single one of us can, or specifically desires to.

I think Joel, being of the generation of programmers before mine, was being a bit pejorative when he called us “API developers”. He remembered the halcyon days and couldn’t help but feel disgust at the change: that we had abandoned the noble craft in pursuit of these abstract APIs, frameworks, stacks, and such, when we should be working with numbers, binary, CPU opcodes, and registers… you know, the real manly kind of programming. (To all you prospective or current freshman CS kids: just wait until your inevitable assembler class. It’ll put hair on your chest.)

But just as the specialization we have embraced in society has brought us so many complex yet wonderful things and left us better off (lifespans are longer; people are happier, more free, and wealthier overall), doing the same in the world of programming can do, and has done, the same. You older programmers may scoff that I’ll never have to write a date parser that accounts for leap years, but I declare it proudly. “Kids today don’t know how to code.” But sir, that’s just one less thing for me to get wrong… you know, since I’m not a date and time guru. I’ll leave that up to the Microsoft engineers, or anyone else writing date/time utilities.
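For instance, here is the leap-year rule I never have to re-derive, in a trivial sketch that leans entirely on one stock BCL call:

```csharp
using System;

class LeapYearDemo
{
    static void Main()
    {
        // Divisible by 4, except centuries, except every 400 years:
        // already encoded, tested, and shipped by someone else.
        Console.WriteLine(DateTime.IsLeapYear(2000)); // True  (divisible by 400)
        Console.WriteLine(DateTime.IsLeapYear(1900)); // False (century, not by 400)
        Console.WriteLine(DateTime.IsLeapYear(2012)); // True  (divisible by 4)
    }
}
```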

In the end, I’ll have far more pieces at my disposal, with far more features and safeties, than I could ever write on my own. I like not having to write a lot of protocol-specific code just to open a connection via ODBC to a database, issue a command, and read back a response in a data structure I can understand and program against, plus account for timeouts, network latency, differing versions, you name it. I’d never get any apps done if I had to be lord of everything, which is what a lot of old-school CLI programmers grew up with and got accustomed to.
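Here is roughly what that trade looks like in practice. This is only a sketch: the DSN name and query are made up, and error handling is omitted. But note how few lines I write in exchange for all the protocol plumbing underneath:

```csharp
using System;
using System.Data.Odbc;

class OdbcDemo
{
    static void Main()
    {
        // "MyDsn" is a hypothetical data source name; substitute your own.
        using (var conn = new OdbcConnection("DSN=MyDsn"))
        {
            conn.Open(); // handshake, authentication, timeouts: not my problem

            using (var cmd = new OdbcCommand("SELECT Name FROM Users", conn))
            using (var reader = cmd.ExecuteReader())
            {
                // The reader hands back rows as typed values I can
                // program against, whatever the wire format was.
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
```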

The monolithic perspective of the console app is that it is the only show on the stage. This top-down view of program organization feeds the power-hungry, those who need the feeling of control, the certainty that the program is doing exactly what they wrote it to do. It affords lots of control over how everything is implemented, but it comes at a high cost. Consistency is at the mercy of the developer: the protocols used, the design of the application, the program flow, the user interaction, anything really. And when you control the whole show, you have to control the whole show. Things like the Standard Library for C++ grew out of a need to stop reinventing the wheel every time, to have a library of reliable common utilities and structures. The APIs and frameworks of today are merely an extension of that same thinking and need.

When you sit down today to write a program, there are myriad frameworks to pick from, and the choice can be daunting. Picking one feels like closing the doors on the others. Will you ever have time to try them? If you do, won’t you just be a master of none? But I’ll take these psychological traumas and conundrums over the alternative any day.

Omake bonus: Programmers used to wear suits and ties to work. How quaint!
