23 August 2009

Who says house prices always go up?

Anyone hoping that the real-estate market will see a swift rebound, or indeed any meaningful recovery, should take note of what happened in previous bubbles. The Texas property bubble that burst in the early ’80s still hasn’t seen a return to pre-bubble values nearly 30 years later.

Now that the myth of ever-rising house prices has been shattered, it may be time to embrace another inconvenient truth: that prices can take decades to recover, at least when adjusted for inflation. A study in June by the Federal Housing Finance Agency, a regulator, pointed out that in parts of Texas house prices still languish some 30% below their 1982 peaks in real terms.

12 August 2009

FHA leads the way towards sub-prime meltdown 2.0

The anemic economic recovery that is being talked about so much is resting on very weak foundations. An astonishing 90% of all new American mortgages are now guaranteed by the US government, in one way or another. It’s as if the private economy has vanished.

Worse, the extraordinary growth of Ginnie Mae (which handles FHA loan guarantees) shows that risky lending continues to thrive, only now it is being handled by the government rather than by private mortgage brokers and investment bankers driving luxury cars. Instead of taking a tip from the private sector, which tightens its lending criteria during recessions, the government is lowering its standards as fast as it can. The rising default rates on new FHA-insured loans bear this out.

Prepare yourself for sub-prime implosion 2.0 in the years ahead, when masses of FHA-insured borrowers default and the government is forced to bail out Ginnie Mae.

Ginnie’s mission is to bundle, guarantee and then sell mortgages insured by the Federal Housing Administration, which is Uncle Sam’s home mortgage shop. Ginnie’s growth is a by-product of the FHA’s spectacular growth. The FHA now insures $560 billion of mortgages—quadruple the amount in 2006. Among the FHA, Ginnie, Fannie and Freddie, nearly nine of every 10 new mortgages in America now carry a federal taxpayer guarantee.

On June 18, HUD’s Inspector General issued a scathing report on the FHA’s lax insurance practices. It found that the FHA’s default rate has grown to 7%, which is about double the level considered safe and sound for lenders, and that 13% of these loans are delinquent by more than 30 days. The FHA’s reserve fund was found to have fallen by more than half, to 3% from 6.4% in 2007—meaning it now has a 33 to 1 leverage ratio, which is into Bear Stearns territory.
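For those wondering where the 33 to 1 figure comes from, it is simply the reciprocal of the reserve ratio (a back-of-the-envelope reading, not the FHA’s own capital arithmetic):

    \[
    \text{leverage} \approx \frac{\text{insured obligations}}{\text{reserves}} = \frac{1}{0.03} \approx 33
    \]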

11 August 2009

Microsoft: the bold, yet timid, giant

Contrary to popular belief, Microsoft is an innovator. I myself participated in ground-breaking technological efforts in my years as a program manager and planner in the Windows division, and the projects I worked on are just the tip of the iceberg. Unfortunately, many of those innovations are in areas that never matter in the grand scheme of things, or are undermined by a simultaneous streak of timidity that courses through the organization. Microsoft is bold and timid, all at the same time.

It is not my intention to smear Microsoft. To paraphrase Tolstoy, every family has its issues, and Microsoft is no exception. My overall experience at the company was a positive one. Much of the hostility and consternation towards Microsoft results from the simple fact that it is difficult for outsiders to comprehend the truly Herculean problems the company faces in creating and maintaining the most widely used software products in the world. So many people use Microsoft’s products, in so many ways, that almost any decision will wind up stepping on someone’s toes. Microsoft takes its commitment to preventing customer disruptions seriously (even if it doesn’t seem so from the outside), and moves heaven and earth in its efforts to do so.

That said, my purpose in this article is to outline some of the chronic problems I saw during my tenure at the firm. I can count on my friends still at the company to do their job of highlighting the company’s strengths (which are significant). It should also be noted that Microsoft is in no way unique in having engineering and corporate-political problems. But the examples which follow will at least illustrate that the company is not immune from such issues.

By the way, I use projects I was directly involved with for most of my examples simply because I know so much about them, not to imply that these are the most egregious missteps Microsoft has made. I have great respect for the people I worked with on these projects, and I know that everyone was always doing what they honestly felt was best for the company and its customers.


Brute force innovation

Centralized control over software development, combined with world-class engineering processes, allows Microsoft to pursue massive undertakings that are simply impossible, or unheard of, at its competitors. Senior managers can (and do) create mandates which set whole armies of engineers in motion to ensure that new initiatives or technologies are thoroughly supported in every single line of code.

In Windows Vista I led an effort to drive adoption of IPv6 throughout the entire operating system. We mounted a massive campaign to get every engineering team in the Windows division to test and qualify their code to work properly with IPv6. I am proud to say that the end result was the world’s first operating system that is nearly 100% IPv6-compatible (with only a couple of unavoidable exceptions) and able to run on a network that has absolutely no IPv4 service.

This may be in the class of brute force innovation, but it’s innovation nonetheless. No other operating system has been subjected to the same degree of thoroughness and dedication in supporting IPv6. Linux hasn’t done it, OS X hasn’t done it. No one has. Having gone through the experience of managing this massive transition effort, I can attest that supporting IPv6 is NOT as trivial as it seems. Plenty of issues were uncovered in tests on IPv6-only networks, where we discovered strange (and unexpected) dependencies on IPv4.
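To give a flavor of what “qualifying code for IPv6” meant in practice, a typical fix was replacing hard-coded IPv4 assumptions (AF_INET, sockaddr_in, dotted-quad parsing) with address-family-agnostic lookups. The snippet below is not Windows code, just a minimal sketch of that pattern using the standard Winsock getaddrinfo API, with a placeholder host and port:

    // Minimal sketch: connect to a host without assuming IPv4 or IPv6.
    // Works on an IPv6-only network because the address family comes
    // from the resolver, not from a hard-coded AF_INET.
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <cstdio>
    #pragma comment(lib, "Ws2_32.lib")

    SOCKET ConnectByName(const char* host, const char* port)
    {
        addrinfo hints = {};
        hints.ai_family = AF_UNSPEC;      // let the resolver pick v4 or v6
        hints.ai_socktype = SOCK_STREAM;

        addrinfo* results = nullptr;
        if (getaddrinfo(host, port, &hints, &results) != 0)
            return INVALID_SOCKET;

        SOCKET s = INVALID_SOCKET;
        for (addrinfo* ai = results; ai != nullptr; ai = ai->ai_next) {
            s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (s == INVALID_SOCKET)
                continue;
            if (connect(s, ai->ai_addr, (int)ai->ai_addrlen) == 0)
                break;                    // connected over whichever family worked
            closesocket(s);
            s = INVALID_SOCKET;
        }
        freeaddrinfo(results);
        return s;
    }

    int main()
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return 1;
        SOCKET s = ConnectByName("www.example.com", "80");
        std::printf(s == INVALID_SOCKET ? "connect failed\n" : "connected\n");
        if (s != INVALID_SOCKET)
            closesocket(s);
        WSACleanup();
        return 0;
    }

Code written this way simply doesn’t care which IP version the network speaks; code that builds a sockaddr_in by hand does, and that is exactly the kind of dependency the qualification effort had to hunt down.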

On the other hand, as impressive as this achievement of reaching IPv6 purity is, one has to wonder why so much effort was expended in the first place. As far as I know, no one is running IPv6-pure networks in anything other than a test lab. Sure, the US Department of Defense mandated that all products it purchased would have to pass tests in an IPv6-only environment, but it eventually rescinded that requirement, and most software firms (and operating systems) were able to play along just enough to provide sufficient IPv6 support to avoid being banned from government procurement lists. After all, there are only a handful of services that most people really use, and the fact that some functions of Linux or OS X don’t fully support IPv6 has never raised an eyebrow.

It can certainly be said that this unparalleled support for IPv6 has meant absolutely nothing to end users, who won’t be using IPv6 networks at home (or at work) for many years.

To be fair, Microsoft is in a somewhat unique situation in the marketplace, which forces its hand into unnatural acts that competitors can slough off. The government isn’t a big customer of OS X, and Linux is free, which allows it to shrug off customer requirements without any trouble. The fact that Linux can be customized to such an extensive degree also means that a vendor can provide a genuinely IPv6-pure version of Linux merely by never installing the components that aren’t IPv6-compatible.

Over-engineering

Another of Microsoft’s chief vices is to over-engineer things. Engineers are always looking for ways to overhaul old sub-systems with monster functionality and breathtaking designs. This is akin to building a rocket-powered scooter with fuel cells when all a kid wants is a skateboard to get across the street.

For my case study of this phenomenon, I give you the Windows Filtering Platform (WFP) introduced in Vista. In conjunction with a completely rebuilt network stack (the value of which could also be debated), WFP was the first comprehensive set of APIs purpose-built into Windows to allow software developers to manipulate and control packets. This was a laudable effort indeed. The network stack in Windows XP (which was based on the one in Windows 2000) was never designed to allow third parties to examine, manipulate, and control packets. It was simply never conceived that software developers would need low-level packet manipulation. This was before anyone considered that the security threats on the Internet would lead to the widespread adoption of host firewalls, all of which depend on low-level access to packets.

A few crude APIs for working with packets were tacked on top of the Windows XP stack as an afterthought, but software developers needing this kind of functionality were largely forced to adopt strange and unnatural strategies to achieve their goals. Firewall developers would often create fake network drivers that pretended to be network hardware. It was easy to make mistakes with this kind of hack. For example, how do you ensure that your fake network interface card driver properly handles the power-up and power-down commands the operating system expects, even though no network card exists? Consequently, host firewalls quickly became one of the primary causes of crashes in Windows XP. No wonder the engineers wanted to remedy this in Vista by finally creating APIs that were designed from the ground up to allow low-level access to the network stack.

So far so good, but this is where things ran off the rails. Instead of creating some simple-to-use APIs for working with packets, the Microsoft engineers (myself included) decided to take the bull by the horns and build an all-encompassing filtering engine that would keep track of, and manage, all the various requests for working with packets that might come from numerous applications. Instead of leaving users struggling in a world of chaos, with different security systems sending conflicting orders to the stack to drop or allow network traffic, the all-knowing Windows Filtering Platform would intelligently determine which of the various instructions should win, to give the user what they want (i.e. a pleasant yet secure experience).

The reality, however, is that no vendor of security software would ever want to leave the decision of what to allow (and disallow) up to someone else. Well into the beta cycle of Vista it became glaringly apparent that few, if any, firewall vendors were willing to adopt WFP, since it would mean giving up control over network activity to the operating system. Microsoft engineers reluctantly conceded the point and created a “veto” flag for filters, which ensures that nothing can override a filter when that option is set.
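For readers who want a concrete sense of what using WFP involves, below is a rough user-mode sketch of adding a single block filter (error handling trimmed; administrator rights required). It sets the hard-action flag, FWPM_FILTER_FLAG_CLEAR_ACTION_RIGHT, which as I recall prevents the decision from being overridden by other filters; I am using it here as a stand-in for the “veto” behavior described above, so treat this as an illustration rather than vendor-grade code.

    // Simplified sketch: add a WFP filter that hard-blocks outbound TCP
    // connections to remote port 23 (telnet). Error handling trimmed.
    #include <windows.h>
    #include <fwpmu.h>
    #include <cstdio>
    #pragma comment(lib, "Fwpuclnt.lib")

    int main()
    {
        HANDLE engine = nullptr;
        DWORD err = FwpmEngineOpen0(nullptr, RPC_C_AUTHN_WINNT,
                                    nullptr, nullptr, &engine);
        if (err != ERROR_SUCCESS) {
            std::printf("FwpmEngineOpen0 failed: %lu\n", err);
            return 1;
        }

        // Match on remote port 23 at the IPv4 connect-authorization layer.
        FWPM_FILTER_CONDITION0 cond = {};
        cond.fieldKey = FWPM_CONDITION_IP_REMOTE_PORT;
        cond.matchType = FWP_MATCH_EQUAL;
        cond.conditionValue.type = FWP_UINT16;
        cond.conditionValue.uint16 = 23;

        FWPM_FILTER0 filter = {};
        filter.displayData.name = const_cast<wchar_t*>(L"Block telnet (example)");
        filter.layerKey = FWPM_LAYER_ALE_AUTH_CONNECT_V4;
        filter.action.type = FWP_ACTION_BLOCK;
        // Mark the action as "hard" so other filters cannot override it.
        filter.flags = FWPM_FILTER_FLAG_CLEAR_ACTION_RIGHT;
        filter.weight.type = FWP_EMPTY;          // let the engine assign a weight
        filter.numFilterConditions = 1;
        filter.filterCondition = &cond;

        UINT64 filterId = 0;
        err = FwpmFilterAdd0(engine, &filter, nullptr, &filterId);
        std::printf("FwpmFilterAdd0 returned %lu (filter id %llu)\n", err, filterId);

        FwpmEngineClose0(engine);
        return 0;
    }

Even this toy example has to reason about layers, sublayers, weights, and action rights, which hints at why a vendor that just wants to drop a packet finds the platform heavier than it would like.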

In the end, WFP can legitimately be called a success. Virtually every firewall in existence now uses WFP, and crashes due to firewalls have dropped significantly. But the dirty secret is that most filters placed into WFP use the veto flag, which pretty much renders the beautifully engineered filtering platform useless.

If Microsoft had only set out to create simpler, comprehensive APIs for packet manipulation, it could have achieved the same result with a significantly lower expenditure of resources. Further, the WFP APIs are needlessly complex for what they are actually being used for, thereby creating more expense for third-party developers.

This is a classic example (among many) of how Microsoft could have achieved the same goal by scaling down its ambitions and actually doing a little less innovation. There was nothing devious afoot with WFP. There was no grand scheme to undermine security or put any partners out of business. Microsoft’s engineers were diligently trying to solve a real problem (i.e. the chaos of conflicting security systems messing with packets), but lost sight of the market realities.

Unfortunately, I saw similar situations occur in Windows 7, and have heard talk of other such ambitious schemes for future OS releases.

Fields of dreams

Luckily, the Windows Filtering Platform did not fall into the category of ignominy reserved for ambitious technologies that never quite take flight. At least WFP has become widely adopted. By contrast, there are many significant technologies and platforms built into Microsoft’s products that are only ever used at the margins and never see wide adoption. The reasons for these failures are numerous, but the result is the same: a technology gets built that just collects dust, never gets removed, and still requires constant maintenance. This is what I call the “field of dreams” phenomenon: a belief that if Microsoft builds it, people will use it.

Picking on networking technologies again (merely because I know them so well, having worked on the network engineering team), Peer-to-Peer (P2P) networking and IPSec figure prominently here. Yes, you can point to examples where some organizations have adopted IPSec to secure their network traffic, but the reality is that these significant engineering efforts have never seen widespread usage.

P2P and IPSec are particularly good examples of how good innovations go awry. P2P is a whole suite of technologies, built on top of IPv6, designed to make peer-to-peer networking a tour de force in Vista by allowing third-party developers to easily incorporate peer-to-peer features. On paper this all sounds great, but building it on top of IPv6 (which no one uses), together with the lack of key functionality, makes it unappealing for most developers to consider. The fact that only limited P2P functionality was made available on Windows XP was also the kiss of death. Which software developer wants to adopt a technology that can only be used on the latest operating system?

IPSec has been suffering similar problems since Windows 2000. Over the years Microsoft has invested heavily in innovations to make the management of fully authenticated and encrypted networks a reality. IPSec is an old IETF standard (supported by every OS in existence), but it is rarely used due to the sheer complexity of managing certificates and creating policies. When IPSec is used, it is generally only for specific functions like VPN access. Yet if you have a network with nothing but Windows Vista and Windows Server 2003 machines, it is quite possible to ensure that all network traffic is fully secured, at all times. The edge firewall becomes obsolete.

Despite many years of effort, wide use of end-to-end IPSec has yet to get off the ground. There are some noteworthy case studies where organizations have fully deployed IPSec throughout their network (including Microsoft itself), but these are the exceptions. Poor cross-platform support has plagued Microsoft’s IPSec efforts from the very start, and this is exacerbated by compatibility lags between releases of Windows. New functionality released in Vista (to overcome some IPSec deployment issues) was never back-ported to Windows XP.

More importantly, Microsoft’s entire IPSec strategy ignored the needs and wishes of key players in the networking space (has anyone heard of Cisco?). Understandably, most network technology providers have always been lukewarm, at best, towards Microsoft’s IPSec vision. What value is there in all manner of network security and traffic management products when all traffic becomes encrypted? When all traffic is gobbledygook, automated management tools can’t tell whether packets belong to Skype, an e-mail server, or a video game. So much for the ability of IT managers to give higher priority to particular types of traffic, or block others altogether.

Of course, Microsoft’s engineers have answers for most of these objections (e.g. some third parties build Linux plug-ins for Microsoft’s IPSec policy management system, and there are strategies for deploying IPSec without encryption so that traffic can still be managed), but the end result is the same: IPSec is still only used at the margins. Most of these problems were foreseeable early on, but institutional momentum, and a grand vision too compelling to die, have kept Microsoft plugging away at it for a decade.

IPSec and P2P are yet more examples of Microsoft’s significant achievements in fruitless innovation. Who knows, maybe the DirectAccess feature in Windows 7 will be the technology that finally pulls IPSec out of obscurity, but I have my doubts. VPN usage is in terminal decline as more and more corporate hosts get put directly on the Internet. Most organizations already allow access to e-mail directly from the Internet (which is the most common reason people need VPN access to corporate networks), and many other key services are being put directly on the Internet as well (e.g. CRM via Salesforce.com). DirectAccess also continues to suffer from the perpetual Windows curse of poor down-level and cross-platform support.

Little orphans

At least IPSec manages to sustain enough momentum to see continual improvements across multiple operating system releases. There are many other technologies that are not nearly so privileged.

There are numerous grand initiatives, with the best of intentions, which get built but become orphans almost from the day they are released. Quite often these ambitious projects fail because the feature list is cut back so drastically, in order to ship on schedule, that they lack the critical capabilities that would have made them popular. I know this is difficult for outsiders to understand, but many Microsoft engineering groups run on shoestring budgets. I was the program manager for one technology in the networking group that had just one developer, while a small competitor with only $50 million in revenue had 10 engineers working on the equivalent product. Is it any wonder that our spunky competition was so easily able to run circles around us, adding features and capabilities that we could only dream of?

These resourcing problems are particularly acute in Microsoft’s big products (like Windows and Office), since so much is bundled together. This makes it very difficult to know how many resources should be devoted to any one area, since there is no way to tell which features are most responsible for generating OS sales. When product teams make their cases to executives for resources, they are hard pressed to show how much a particular feature will contribute to increasing overall revenue.

Any technology that doesn’t get widely adopted after its initial release is liable to find itself abandoned and never improved upon down the road. ClickOnce is an example of just such a technology. It was initially envisioned as a replacement for the Windows Installer (a.k.a. MSI) that would be free from the myriad headaches of application installation packages. However, ClickOnce functionality was pruned so heavily, in order to ship on time, that it only works for the most basic types of applications: those with no need for operating system extensions (e.g. the ability to automatically open an application by clicking on a designated file type).

It is far easier for developers to create ClickOnce installation packages than Windows Installer packages, and ClickOnce applications are easily updated, with few nasty side effects for end users to worry about. Unfortunately, the limitations of ClickOnce render it unusable for most software.
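To make the limitation concrete, registering a file-type association is exactly the kind of machine-wide change a traditional Windows Installer package can make but that ClickOnce, as shipped, could not. The sketch below uses hypothetical names (a “.foo” extension, a “MyApp.Document” ProgID, a made-up install path) and simply shows the registry entries such an installer writes:

    // Sketch: associate a hypothetical ".foo" extension with a hypothetical
    // "MyApp.Document" ProgID so double-clicking a .foo file launches the app.
    // Writing to HKEY_CLASSES_ROOT generally requires administrator rights.
    #include <windows.h>
    #include <cwchar>
    #pragma comment(lib, "Advapi32.lib")

    static LONG SetDefaultValue(HKEY root, const wchar_t* subKey, const wchar_t* value)
    {
        HKEY key = nullptr;
        LONG rc = RegCreateKeyExW(root, subKey, 0, nullptr, 0,
                                  KEY_SET_VALUE, nullptr, &key, nullptr);
        if (rc != ERROR_SUCCESS)
            return rc;
        rc = RegSetValueExW(key, nullptr, 0, REG_SZ,
                            reinterpret_cast<const BYTE*>(value),
                            static_cast<DWORD>((std::wcslen(value) + 1) * sizeof(wchar_t)));
        RegCloseKey(key);
        return rc;
    }

    int main()
    {
        // .foo -> ProgID
        SetDefaultValue(HKEY_CLASSES_ROOT, L".foo", L"MyApp.Document");
        // ProgID -> command line used when a .foo file is opened
        SetDefaultValue(HKEY_CLASSES_ROOT, L"MyApp.Document\\shell\\open\\command",
                        L"\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\"");
        return 0;
    }

An application that cannot make (or have its installer make) this sort of registration simply never shows up when the user double-clicks a document, which is why so much software could not move to ClickOnce.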

The paltry uptake of ClickOnce after its initial release brought the original vision of a grand new replacement for the Windows Installer to a virtual stop. It was too difficult to make a compelling argument that more money should be spent on ClickOnce to realize the original intent, since usage was so low. This is a circular problem: you will never get sufficient adoption without additional investment, but you can’t justify the investment without the adoption.

An unfortunate side effect of all the vacillation over ClickOnce is that the Windows Installer itself (the veritable workhorse for installing the majority of software written for Windows) has been put on ice as well. Microsoft has made only the bare minimum of investments in both ClickOnce and the Windows Installer for many years. Yes, there are some minor improvements to the Windows Installer and ClickOnce in Windows 7, but “minor” is the operative word.

When Microsoft’s engineers decide that a given technology is too antiquated and needs to be put to rest, they put it on life support, even if they don’t have any viable alternative on the horizon (there is ALWAYS talk of building some amazing new technology that will replace all the old ones, most of which never sees the light of day). The Graphics Device Interface (GDI) is yet another example of this phenomenon.

Sadly, this often means that key technologies (like the Windows Installer and GDI) can go through multiple OS releases, spanning a decade or more, with no real investment to speak of, when some very minor improvements could relieve a lot of the pain that developers (and users) are facing. Once the product teams reach a point where they feel a given technology is antiquated, it is harder than pulling teeth from a rodent to get them to touch the code and make additional investments. Instead, energies are spent spinning up proposals for the NEXT big technology that will replace everything.

Having seen enough of these grand schemes come and go over the years, I can honestly say that precious few of them ever amount to anything, and most wind up as instant orphans if they are lucky enough to get built in the first place.

Which brings me to a counter-intuitive conclusion: in many cases Microsoft would be far better served if it were less innovative and really dug into the hard work of incremental improvement. I am not against taking risks and making bold investments in new technologies. But if you know your new creation won’t be able to achieve a critical mass of functionality out of the gate, then you would be better off not even trying, and putting your resources into the tedious effort of improving what already exists.

NOTE: If you could spare about 10 minutes, I would love to get your input on a survey I am conducting about how the recession is impacting your IT spending.

05 August 2009

Anatomy of a job search: the meeting

Since I already hang out at the library doing a lot of my job search work, I have taken the liberty of booking one of the conference rooms at the Bellevue Lakehills library for Monday, August 10th, to talk about job search strategies with anyone who might want to come.

I will go over the job search strategy that I outlined in my recent blog post, and open things up for a discussion of what people think is good and what could be improved. By sharing our thoughts we will all come away with new ideas and inspiration for our own efforts.

This meeting is open to anyone in the Seattle area, whether you are looking for a job or just want to keep your options open. Just bring your lunch along and join the discussion.

If you do plan on attending, please e-mail me.

Meeting Details:

Date: Monday, August 10th
Time: 12:00pm to 1:00pm
Location: Lakehills Library in Bellevue

You can get directions to the library here:
http://www.kcls.org/lakehills/

You can look at my blog post about my job search strategy here:
http://surkanstance.blogspot.com/2009/07/tales-from-job-search-trenches.html