Cable Companies Need To Understand Changes...or they'll go the way of Over The Air Broadcasters...
Stolen from David Isenberg's SMART Newsletter #86, out today -
=========================
Quote of Note: Greg Blonder
One SMART Person pointed out to me that he'd need a subscription to WSJ-interactive to read Greg Blonder's Barron's article, in which he says that technology investments are in the zone when they promise 2x to 4x improvement. Myself, I'm happy to pay. But the right-now solution is fair use. So here are some key excerpts from Blonder's article. -- David I
"These are turbulent times for the cable industry. Long undercapitalized and led by free-wheeling, iconoclastic empire builders, the cable industry is struggling with the end of an era of significant new subscriber growth.
"Many investors view cable as a mature business with the potential for squeezing out only marginal returns. Cable, however, has moved far more aggressively onto the Internet than the big phone companies. . .
"In fact, most cable companies are dead on their feet. Their grim fate will become obvious in five years and that fate will be well on its way to reality in 10 years. Only the cable operators that perceive the trends early enough and act in time have an opportunity to survive and achieve some success. There is probably even room for one great company to emerge out of the cable industry -- as the foremost champion of wireless broadband Ethernet to the home.
"To understand what is happening, observe the trajectories of two long-term trends [see chart at http://tinyurl.com/7chh]. Around 2004, . . . sending TV-quality images directly to the home over the Internet will be simple, and a historic business barrier will fall.
"A few content providers then will say, 'Cable systems are a thing of the past. You consumers ought to dial in and get your videos and your TV shows directly from us.' Others will follow more slowly.
"A decade after the lines intersect, virtually all content providers will have found many ways around the less-than-beloved cable middleman.
"Many businesses facing such long-term trends retreat to a state of self-denial. In the early 1990s, AT&T management argued internally that the steady upward curve of Internet usage would somehow collapse. The idea that it might actually overshadow traditional telephone service was simply unthinkable. But the trend could not be stopped -- or even slowed -- by wishful thinking and clever marketing. One by one, the props that held up the long-distance business collapsed.
"In video compression and transmission speeds to the home, we are dealing with classic cost-improvement curves. Such curves represent the interactions between market and scientific disciplines, and they are very predictable. Experience shows that if new technologies promise improvements by a factor of more than four, they tend not to get funded because they are seen as too risky.
"And if they promise less than a factor of two, they tend not to get funded because they offer too little economic benefit. So each new generation of products -- be they jet engines, software or chicken broilers -- brings about a measured, highly predictable benefit.
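Blonder's 2x-to-4x funding window reduces to a one-line rule of thumb. A minimal sketch (the function name and the sample factors are my own, for illustration):

```python
def fundable(improvement_factor):
    """Blonder's rule of thumb: projects promising between 2x and 4x
    improvement tend to attract funding; above 4x they look too risky,
    below 2x they offer too little economic benefit."""
    return 2.0 <= improvement_factor <= 4.0

# A 3x jet engine, software, or chicken-broiler improvement gets funded;
# a 1.5x tweak and a 5x moonshot both tend not to.
for factor in (1.5, 3.0, 5.0):
    print(factor, fundable(factor))
```

The interesting consequence, which the article draws out, is that each product generation delivers a *predictable* benefit, so the curves can be extrapolated years ahead.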
"Thus the curves determining cable's future stretch out for all to see. And as the modems cable companies are so energetically promoting grow ever faster and ever cheaper, the cable companies will find that they have unintentionally cut themselves out of the content delivery business. Customers will simply bypass the cable operator's content -- and its layers of fees -- and go straight to the source.
"What cable companies must do is become transport companies. A smart cable company would stop pouring money into projects that conflict with the new reality and have repeatedly failed to gain traction in past trials.
"The hundreds of millions of dollars being invested in set-top-box entertainment hubs would flow elsewhere. In the coming era of direct contact between content-providers and consumers, the set-top box will no longer be required as a mediator for information. No amount of additional set-top box features can change that fact. Instead, the open standards of the Internet will dominate -- and a range of network-attached devices will be made and sold direct to the consumer by consumer-electronics companies.
"A smart cable company would focus all available investment dollars on finding new ways to become a better pipe company -- facilitating the streaming of video through the cable modem, for example, so that true TV-quality on the computer screen becomes dependable.
"The present cost structure of the cable industry remains way out of line for such a model. Yet as cost-efficient pipe providers, cable companies would be well-positioned to fight off the local phone companies, who will almost certainly continue to suffer from lethargy and capital inefficiency in defending their voice services.
"Even with a full-blown crisis for cable years away, it is clear that only the most efficient users of capital will win . . ."
Excerpted from "Creative Destruction," by Greg Blonder, Barron's, November 11, 2002, http://tinyurl.com/6hg0. (WSJ subscription required....) posted by James
1:34 PM
NEC @ Shirky.com is a mailing list about Networks, Economics, and Culture. Clay Shirky regularly presents some interesting insights into networking. Go look and subscribe (free) if you like it.
Telecom this time, an essay on two patterns of network deployment -- perma-net and nearly-net. Permanet is the visionary approach -- big ideas, big engineering, high quality, high cost. Nearlynet, by comparison, is a baling-wire-and-twine, what-works-today, built-in-pieces kind of network.
Though everyone likes the idea of permanet, the article concerns the ways nearlynet is better aligned with the technological, economic, and social forces that help networks actually get built. This pattern is particularly important as wireless data services are being built today around these two visions -- 3G as permanet, and Wifi as nearlynet.
- clay
========
For most of the past year, on many US airlines, those phones inserted into the middle seat have borne a label reading "Service Disconnected." Those labels tell a simple story -- people don't like to make $40 phone calls. They tell a more complicated one as well, about the economics of connectivity and about two competing visions for access to our various networks. One of these visions is the one everyone wants -- ubiquitous and convenient -- and the other vision is the one we get -- spotty and cobbled together.
Call the first network "perma-net," a world where connectivity is like air, where anyone can send or receive data anytime anywhere. Call the second network "nearly-net", an archipelago of connectivity in an ocean of disconnection. Everyone wants permanet -- the providers want to provide it, the customers want to use it, and every few years, someone announces that they are going to build some version of it. The lesson of in-flight phones is that nearlynet is better aligned with the technological, economic, and social forces that help networks actually get built. The most illustrative failure of permanet is the airphone. The most spectacular was Iridium. The most expensive will be 3G.
- "I'm (Not) Calling From 35,000 Feet"
The airphone business model was obvious -- the business traveler needs to stay in contact with the home office, with the next meeting, with the potential customer. When 5 hours of the day disappears on a flight, value is lost, and business customers, the airlines reasoned, would pay a premium to recapture that value.
The airlines knew, of course, that the required investment would make in-flight calls expensive at first, but they had two forces on their side. The first was a captive audience -- when a plane was in the air, they had a monopoly on communication with the outside world. The second was that, as use increased, they would pay off the initial investment, and could start lowering the cost of making a call, further increasing use.
What they hadn't factored in was the zone of connectivity between the runway and the gate, where potential airphone users were physically captive, but where their cell phones still worked. The time spent between the gate and the runway can account for a fifth of even long domestic flights, and since that is when flight delays tend to appear, it is a disproportionately valuable time in which to make calls.
This was their first miscalculation. The other was that they didn't know that competitive pressures in the cell phone market would drive the price of cellular service down so fast that the airphone would become _more_ expensive, in relative terms, after it launched.
The negative feedback loop created by this pair of miscalculations marginalized the airphone business. Since price displaces usage, every increase in the availability of cell phones or reduction in the cost of a cellular call meant that some potential users of the airphone would opt out. As users opted out, the projected revenues shrank. This in turn postponed the date at which the original investment in the airphone system could be paid back. The delay in paying back the investment delayed the date at which the cost of a call could be reduced, making the airphone an even less attractive offer as the number of cell phones increased and prices shrank still further.
- 66 Tears
This is the general pattern of the defeat of permanet by nearlynet. In the context of any given system, permanet is the pattern that makes communication ubiquitous. For a plane ride, the airphone is permanet, always available but always expensive, while the cell phone is nearlynet, only intermittently connected but cheap and under the user's control.
The characteristics of the permanet scenario -- big upfront investment by few enough companies that they get something like monopoly pricing power -- is usually justified by the assumption that users will accept nothing less than total connectivity, and will pay a significant premium to get it. This may be true in scenarios where there is no alternative, but in scenarios where users can displace even some use from high- to low-priced communications tools, they will.
This marginal displacement matters because a permanet network doesn't have to be unused to fail. It simply has to be underused enough to be unprofitable. Builders of large networks typically overestimate the degree to which high cost deflects use, and underestimate the number of alternatives users have in the ways they communicate. And in the really long haul, the inability to pay off the initial investment in a timely fashion stifles later investment in upgrading the network.
This was the pattern of Iridium, Motorola's famously disastrous network of 66 satellites that would allow the owner of an Iridium phone to make a phone call from literally anywhere in the world. This was permanet on a global scale. Building and launching the satellites cost billions of dollars, the handsets cost hundreds, the service cost dollars a minute, all so the busy executive could make a call from the veldt.
Unfortunately, busy executives don't work in the veldt. They work in Pasadena, or Manchester, or Caracas. This is the SUV pattern -- most SUV ads feature empty mountain roads but most actual SUVs are stuck in traffic. Iridium was a bet on a single phone that could be used anywhere, but its high cost eroded any reason to use an Iridium phone in most of the perfectly prosaic places phone calls actually get made.
- 3G: Going, Going, Gone
The biggest and most expensive permanet effort right now is wireless data services, principally 3G, the so-called 3rd-generation wireless service, and GPRS, the General Packet Radio Service (though the two services are frequently lumped together under the 3G label). 3G data services provide always-on connections and much higher data rates to mobile devices than the widely deployed GSM networks do, and the wireless carriers have spent tens of billions worldwide to own and operate such services. Because 3G requires licensed spectrum, the artificial scarcity created by treating the airwaves like physical property guarantees limited competition among 3G providers.
The idea behind 3G is that users want to be able to access data any time, anywhere. This is of course true in the abstract, but there are two caveats: the first is that they do not want it at any cost, and the second and more worrying one is that they won't use 3G in environments where they have other ways of connecting more cheaply.
The nearlynet to 3G's permanet is Wifi (and, to a lesser extent, flat-rate priced services like email on the Blackberry). 3G partisans will tell you that there is no competition between 3G and Wifi, because the services do different things, but of course that is exactly the problem. If they did the same thing, the costs and use patterns would also be similar. It's precisely the ways in which Wifi differs from 3G that makes it so damaging.
The 3G model is based on two permanetish assumptions -- one, that users have an unlimited demand for data while traveling, and two, that once they get used to using data on their phone, they will use it everywhere. Both assumptions are wrong.
First, users don't have an unlimited demand for data while traveling, just as they didn't have an unlimited demand for talking on the phone while flying. While the mobile industry has been telling us for years that internet-accessible cellphones will soon outnumber PCs, they fail to note that for internet _use_, measured in either hours or megabytes, the PC dwarfs the phone as a tool. Furthermore, in the cases where users do demonstrate high demand for mobile data services by getting 3G cards for their laptops, the network operators have been forced to raise their prices, the opposite of the strategy that would drive use. Charging more for laptop use makes 3G worse relative to Wifi, whose prices are constantly falling (access points and Wifi cards are now both around $60).
The second problem is that 3G services don't just have the wrong prices, they have the wrong kind of prices -- metered -- while Wifi is flat-rate. Metered data gives the user an incentive to wait out the cab ride or commute and save their data-intensive applications for home or office, where sending or receiving large files creates no additional cost. The more data-intensive a user's needs are, the greater the price advantage of Wifi, and the greater their incentive to buy Wifi equipment. At current prices, a user can buy a Wifi access point for the cost of receiving a few PDF files over a 3G network, and the access point, once paid for, will allow for unlimited use at much higher speeds.
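The break-even arithmetic behind that last claim can be made explicit. The per-megabyte rate below is a hypothetical figure for illustration; only the roughly $60 access-point price comes from the essay:

```python
# Assumed 3G per-megabyte charge, in dollars -- illustrative, not an
# actual 2003 tariff. The $60 Wifi access point price is from the essay.
METERED_RATE_PER_MB = 4.00
WIFI_ACCESS_POINT = 60.00

def megabytes_to_break_even(metered_rate, flat_cost):
    """How many megabytes of metered traffic cost as much as the
    one-time purchase of a flat-rate alternative."""
    return flat_cost / metered_rate

print(megabytes_to_break_even(METERED_RATE_PER_MB, WIFI_ACCESS_POINT))
# 15.0 -- roughly "a few PDF files"; every megabyte after that is free
# on Wifi but still billed on the metered network.
```

The design point is that metered pricing makes the user do this arithmetic on every download, which is exactly the incentive that drives displacement.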
- The Vicious Circle
In airline terms, 3G is like the airphone, an expensive bet that users in transit, captive to their 3G provider, will be happy to pay a premium for data communications. Wifi is like the cell phone, only useful at either end of travel, but providing better connectivity at a fraction of the price. This matches the negative feedback loop of the airphone -- the cheaper Wifi gets, both in real dollars and in comparison to 3G, the greater the displacement away from 3G, the longer it will take to pay back the hardware investment (and, in countries that auctioned 3G licenses, the stupefying purchase price), and the later the day the operators can lower their prices.
More worryingly for the operators, the hardware manufacturers are only now starting to toy with Wifi in mobile devices. While the picture phone is a huge success as a data capture device, the most common use is "Take picture. Show friends. Delete." Only a fraction of the photos that are taken are sent over 3G now, and if the device manufacturers start making either digital cameras or picture phones with Wifi, the willingness to save a picture for free upload later will increase.
Not all permanets end in total failure, of course. Unlike Iridium, 3G is seeing some use, and that use will grow. The displacement of use to cheaper means of connecting, however, means that 3G will not grow as fast as predicted, raising the risk of being too little used to be profitable.
- Partial Results from Partial Implementation
In any given situation, the builders of permanet and nearlynet both intend to give the customers what they want, but since what customers want is good cheap service, it is usually impossible to get there right away. Permanet and nearlynet are alternate strategies for evolving over time.
The permanet strategy is to start with a service that is good but expensive, and to make it cheaper. The nearlynet strategy is to start with a service that is lousy but cheap, and to make it better. The permanet strategy assumes that quality is the key driver of a new service, and permanet has the advantage of being good at every iteration. Nearlynet assumes that cheapness is the essential characteristic, and that users will forgo quality for a sufficient break in price.
What the permanet people have going for them is that good vs. lousy is not a hard choice to make, and if things stayed that way, permanet would win every time. What they have going against them, however, is incentive. The operator of a cheap but lousy service has more incentive to improve quality than the operator of a good but expensive service does to cut prices. And incremental improvements to quality can produce disproportionate returns on investment when a cheap but lousy service becomes cheap but adequate. The good enough is the enemy of the good, giving an edge over time to systems that produce partial results when partially implemented.
- Permanet is as Permanet Does
The reason the nearlynet strategy is so effective is that coverage over cost is often an exponential curve -- as the coverage you want rises, the cost rises far faster. It's easier to connect homes and offices than roads and streets, easier to connect cities than suburbs, suburbs than rural areas, and so forth. Thus permanet as a technological condition is tough to get to, since it involves biting off a whole problem at once. Permanet as a personal condition, however, is a different story. From the user's point of view, a kind of permanet exists when they can get to the internet whenever they like.
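A toy model of that coverage-versus-cost curve. The exponential shape is the essay's claim; the base and steepness constants below are arbitrary assumptions chosen only to show the shape:

```python
import math

def cost_of_coverage(fraction, base=1.0, steepness=8.0):
    """Toy model: the cost of covering a fraction of territory grows
    exponentially as that fraction approaches 1 (homes and offices are
    cheap; the last rural mile is ruinous). Constants are illustrative."""
    return base * math.exp(steepness * fraction)

# Going from half coverage to near-total coverage multiplies cost many
# times over -- which is why "biting off the whole problem at once" is
# so hard for permanet builders.
for f in (0.5, 0.9, 0.99):
    print(f, round(cost_of_coverage(f), 1))
```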
For many people in the laptop tribe, permanet is almost a reality now, with home and office wired, and any hotel or conference they attend Wifi- or ethernet-enabled, at speeds that far outstrip 3G. And since these are the people who reliably adopt new technology first, their ability to send a spreadsheet or receive a web page faster and at no incremental cost erodes the early use the 3G operators imagined building their data services on.
In fact, for many business people who are the logical customers for 3G data services, there is only one environment where there is significant long-term disconnection from the network: on an airplane. As with the airphone itself, the sky may be a connection-poor environment for some time to come, not because it isn't possible to connect it, but because the environment on the plane isn't nearly nearlynet enough, which is to say it is not amenable to inexpensive and partial solutions. The lesson of nearlynet is that connectivity is rarely an all or nothing proposition, much as would-be monopolists might like it to be. Instead, small improvements in connectivity can generally be accomplished at much less cost than large improvements, and so we continue growing towards permanet one nearlynet at a time.
=========================================
A comparison between the unregulated "dot-com" businesses and the airlines. Capitalism at work. Government regulations do not work, despite the "best intentions".
Although the war seems indelibly linked to today's economy, it isn't the reason the airline industry, for one, is failing. Nor is terrorism the cause, though both are making profits harder to come by. Perhaps, as the Fleet Street Letter's Lynn Carpenter suggests, the culprit is instead a 'lack of capitalism'...
UNITED AIRLINES DOT COM By Lynn Carpenter
It seems we don't have much respect for good old capitalism. And now, with the airline industry, we have a chance to let capitalism work (though I doubt we will).
I've changed my mind about the dot-com bubble in light of the airline industry's woes. The bubble wasn't a mistake at all. It was only capitalism. It was gloriously messy, silly, energetic and lovably effective capitalism.
If American passenger airlines had been run like dot-com startups, we would have better airlines. We might even get a chicken salad sandwich every few hours or thousand miles. Just consider the differences and eerie parallels between the dot-com mess and the airline mess.
Most of the bubble Internet businesses had (1) no business plan, (2) no profits and sometimes (3) no sales, either. Ditto the airlines, except that they have sales. Actually, we can credit them with two and a half out of three weaknesses, because airlines are cyclic and sometimes even the sales aren't very good.
At the height of the dot-com bubble, I wondered how anyone could have thrown good investment money after hopeless dot- com ideas. But I wasn't fair when counting the gains. The tech bubble actually did a lot of good. Look at the positives:
First, a number of fools were parted from their money, as they should be. Fools who get too much money and keep it too long are apt to overestimate their wisdom. Sometimes they run for public office...like Nelson Rockefeller and Ross Perot.
Second, more to the purpose, unfettered opportunity often creates riches that draw more opportunists. Capitalists multiply around new opportunities until they cheapen the product (literally and figuratively) to the point where we can all afford it. In an eye blink, the Internet went from a system available only to million-dollar-budget military users to universities to the street level. If airlines had done the same, we'd all have commuter planes in our backyards and they'd be teaching Flyers-Ed in high schools.
Once the opportunists drag a new idea down to the street level, even more entrepreneurs pile on. Again, this only happens where capitalism is free and unrestrained. It didn't happen with radio or television because licensing brought about the restriction of ownership.
But where the government doesn't stand in the way, competitors eventually overpopulate. That's what happened with the dot coms. Now, dot-com competitors have to fight each other for survival, looking for ways to reach us and make us their customers...and keep us that way.
Tough for them, great for us. Books-a-Million and several others didn't make it, but they pushed Amazon to higher standards. Amazon had to excel to keep its lead.
All across the industry, dot coms had to find things we would want and would use. This kind of open competition speeds innovation and practical adaptation.
Open competition encourages creative product uses - of which the original inventors never dreamed. The Internet began as a secure communications system, but for us, it became the world's most convenient bookstore, forum for public auctions, provider of online medical diagnoses, source of music, center of postage-free e-mail, mortgage calculator, instant stock quote ticker, instant news deliverer, cheaper and cheaper brokerage commissioner...
When capitalism works right, greed is good. Greedy hordes of new entrepreneurs who want to be millionaires keep up the pressure on the first-movers. To win the contest, the entrepreneurs have to work harder and better, offer more to their customers, upgrade products, cut prices...
Then comes the great shootout. Supply exceeds demand, the weaklings fail and the strong settle down to viable business plans.
But the airlines didn't follow this path. During their early years, government regulated them heavily. In that time, the customs, procedures and labor practices born in a monopoly became part of the business. Then, with deregulation, the field opened up for a while. But the competition was never as robust as in the dot coms. Routes are still regulated and limited. Flying time is limited (well, I don't mind that...since I'd prefer my pilot was not flying double-shifts on uppers to stay awake).
Recently, I requested an aisle or window seat on a crowded plane but couldn't get one. When I boarded, I found the window seat in my row was given to a non-paying employee returning home. Would a restaurateur turn away a paying customer to feed the assistant cook at the best table in the house - for free - on his night off? Of course not! Paid sales always come before employee perks and freebies in well-run businesses, unless the freebie has advertising value. Likewise, not long ago I worked with a man who had once been an airline desk employee. He still flies frequently on standby for almost nothing...yet he hasn't worked for the airline in nearly 10 years. Imagine if McDonalds had to serve free hamburgers for life to every kid who took a summer job with them.
Now the airlines are in trouble. But they won't go as quickly and quietly as the dot coms. In fact, there are rumors of some very bad ideas. Here's a big-government plan that would turn even a socialist's stomach: nationalize the airlines.
Who would want them in their current configuration? The ghost of Joseph Stalin? A lunatic who escaped and accidentally got elected to Congress? Sir Richard Branson, founder of Virgin Airways, once told Warren Buffett that the fastest way to become a millionaire was to start out as a billionaire and buy an airline.
I have one suggestion for the airlines: let 'em go.
You probably think that would disrupt the economy. Bingo! Of course it would disrupt the economy. Absolutely. That's the plan. It's called capitalism. Stand out of the way, and let supply and demand do the work. Not allowing airlines to 'hurt the economy' in the short run will seriously harm it in the long run.
We will never be so lucky, though. Government's bound to do something. We taxpayers will either fund more bailouts or inherit a cancer-ridden business, complete with the continuing wondrous management of the industry experts who pumped their businesses full of carcinogens in the first place.
And now we are at war. Airlines are crying foul again. Don't let them fool you: the war and terrorism only accelerated what was already happening. The airlines are in this condition because, in the past, too much governmental support, regulation and monopoly allowed them to grow fat and lazy. They never developed the muscles they needed for more difficult times.
The federal budget deficit grows every day. The war adds to the cost. I, for one, certainly don't want to put an airline bailout on the shopping list.
1. Never, under any circumstances, take a sleeping pill
and a laxative on the same night.
2. If you had to identify, in one word, the reason why the
human race has not achieved, and never will achieve, its
full potential, that word would be "meetings."
3. There is a very fine line between "hobby" and "mental
illness."
4. People who want to share their religious views with you
almost never want you to share yours with them.
5. You should not confuse your career with your life.
6. No matter what happens, somebody will find a way to
take it too seriously.
7. When trouble arises and things look bad, there is
always one individual who perceives a solution and is
willing to take command. Very often, that individual is
crazy.
8. Nobody cares if you can't dance well. Just get up and
dance.
9. Never lick a steak knife.
10. Take out the fortune before you eat the cookie.
11. The most powerful force in the universe is gossip.
12. You will never find anybody who can give you a clear
and compelling reason why we observe daylight savings
time.
13. You should never say anything to a woman that even
remotely suggests that you think she's pregnant unless you
can see an actual baby emerging from her at that moment.
14. The one thing that unites all human beings, regardless
of age, gender, religion, economic status or ethnic
background, is that, deep down inside, we ALL believe that
we are above average drivers.
15. The main accomplishment of almost all organized
protests is to annoy people who are not in them.
16. A person who is nice to you, but rude to the waiter is
not a nice person. (This is very important. Pay attention.
It never fails.)
17. Your friends love you anyway.
18. Never be afraid to try something new. Remember that a
lone amateur built the Ark. A large group of professionals
built the Titanic.
A close look at a post-bubble survivor...and it may be the next 'large' IPO. They have rightly recognized the two most important factors in success: the limits of the human lifespan (time and attention), and the generation of trust.
... Its performance is the envy of executives and engineers around the world ... For techno-evangelists, Google is a marvel of Web brilliance ... For Wall Street, it may be the IPO that changes everything ( again ) ... But Google is also a case study in savvy management -- a company filled with cutting-edge ideas, rigorous accountability, and relentless attention to detail ... Here's a search for the growth secrets of one of the world's most exciting young companies -- a company from which every company can learn.
by Keith H. Hammonds
On Tuesday morning, January 21, the world awoke to nine new words on the home page of Google Inc., purveyor of the most popular search engine on the Web: "New! Take your search further. Take a Google Tour." The pitch, linked to a demo of the site's often overlooked tools and services, stayed up for 14 days and then disappeared.
To most reasonable people, the fleeting house ad seemed inconsequential. But imagine that you're unreasonable. For a moment, try to think like a Google engineer -- which pretty much requires being both insanely passionate about delivering the best search results and obsessive about how you do that.
If you're a Google engineer, you know that those nine words comprised about 120 bytes of data, enough to slow download time for users with modems by 20 to 50 milliseconds. You can estimate the stress that 120 bytes, times millions of searches per minute, put on Google's 10,000 servers. On the other hand, you can also measure precisely how many visitors took the tour, how many of those downloaded the Google Toolbar, and how many clicked through for the first time to Google News.
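That modem arithmetic is easy to check. A quick sketch (the line rates below are nominal dial-up modem speeds; real throughput, compression, and protocol overhead would shift the numbers somewhat):

```python
def extra_delay_ms(extra_bytes, modem_bps):
    """Milliseconds added to a page download by extra_bytes of content
    at a given modem line rate, ignoring compression and overhead."""
    return extra_bytes * 8 / modem_bps * 1000

# 120 bytes of home-page text over common dial-up speeds: roughly
# 17-33 ms, consistent with the 20-to-50-millisecond range cited,
# once slower lines and protocol overhead are allowed for.
for bps in (28_800, 33_600, 56_000):
    print(bps, round(extra_delay_ms(120, bps), 1))
```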
This is what it's like inside Google. It is a joint founded by geeks and run by geeks. It is a collection of 650 really smart people who are almost frighteningly single-minded. "These are people who think they are creating something that's the best in the world," says Peter Norvig, a Google engineering director. "And that product is changing people's lives."
Geeks are different from the rest of us, so it's no surprise that they've created a different sort of company. Google is, in fact, their dream house. It also happens to be among the best-run companies in the technology sector. At a moment when much of business has resigned itself to the pursuit of sameness and safety, Google proposes an almost joyous antidote to mediocrity, a model for smart innovation in challenging times.
Google's tale is a familiar one: Two Stanford doctoral students, Sergey Brin and Larry Page, developed a set of algorithms that in 1998 sparked a holy-shit leap in Web-search performance. Basically, they turned search into a popularity contest. In addition to gauging a phrase's appearance on a Web page, as other engines did, it assessed relevance by counting the number and importance of other pages that linked to that page.
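The idea described above -- a page matters if pages that matter link to it -- can be illustrated with a minimal power-iteration sketch. This is a generic PageRank-style toy of my own, not Google's actual algorithm or code; the damping factor and graph are illustrative:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration sketch: rank flows along links, so a page
    linked to by important pages becomes important itself. `links` maps
    each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}  # baseline for every page
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share  # each outlink passes on a share
        rank = new
    return rank

# Toy graph: two pages link to "c", none link to "d" -- so "c" outranks "d"
# even though both have one outgoing link.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]})
print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```

This popularity-contest mechanism is the "holy-shit leap" the article refers to: relevance comes from the link structure of the Web itself, not just from the words on the page.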
Since then, newer search products such as Teoma and Fast have essentially matched Google's advance. But Google remains the undisputed search heavyweight. Google says it processes more than 150 million searches a day -- and the true number is probably much higher than that. Google's revenue model is notoriously tough to deconstruct: Analysts guess that its revenue last year was anywhere from $60 million to $300 million. But they also guess that Google made quite a bit of money.
As a result, there is constant, hopeful speculation among financiers around an initial public offering, a deal that could be this decade's equivalent of the 1995 Netscape IPO. A few years back, such a deal might have valued Google at $3 billion or more. Even today, a Google offering might fetch $1 billion.
For now, though, most of the cars in the lot outside Google's modest offices in a Mountain View, California office park are beat-up Volvos and Subarus, not Porsches. And while Googlers may relish their shot at impossible wealth, they appear driven more by the quest for impossible perfection. They want to build something that searches every bit of information on the Web. More important, they want to deliver exactly what the user is looking for, every time. They know that this won't ever happen, and yet they keep at it. They also pursue a seemingly gratuitous quest for speed: Four years ago, the average search took approximately 3 seconds. Now it's down to about 0.2 seconds. And since 0.2 is more than zero, it's not quite fast enough.
Google understands that its two most important assets are the attention and trust of its users. If it takes too long to deliver results or an additional word of text on the home page is too distracting, Google risks losing people's attention. If the search results are lousy, or if they are compromised by advertising, it risks losing people's trust. Attention and trust are sacrosanct.
Google also understands the capacity of the Web to leverage expertise. Its product-engineering effort is more like an ongoing, all-hands discussion. The site features about 10 technologies in development, many of which may never be products per se. They are there because Google wants to see how people react. It wants feedback and ideas. Having knowledgeable people in on the game tells you earlier whether a good idea will actually work.
But what is most striking about Google is its internal consistency. It is a beautifully considered machine, each piece seemingly true to all the rest. The appearance of advertising on a page, for example, follows the same rules that dictate search results or even new-product innovation. Those rules are simple, governed by supply, demand, and democracy -- which is more or less the logic of the Internet too.
Like its search engine, Google is a company overbuilt to be stronger than it has to be. Its extravagance of talent allows it crucial flexibility -- the ability to experiment, to try many things at once. "Flexibility is expensive," says Craig Silverstein, a 30-year-old engineer who dropped his pursuit of a Stanford PhD to become Google's first employee. "But we think that flexibility gives you a better product. Are we right? I think we're right. More important, that's the sort of company I want to work for."
And the sort of company that every company can learn from. What follows, then, is our effort to "google" Google: to search for the growth secrets of one of the world's most exciting growth companies. Like the logic of the search engine itself, our search was deep and democratic. We didn't focus on Google's big three: CEO Eric Schmidt and founders Brin and Page. Instead, we went into the ranks and talked with the project managers and engineers who make Google tick. Here's what we learned.
Rule Number One: The User Is in Charge
"There are people searching the Web for 'spiritual enlightenment.' " Peter Norvig says this with such utter solemnity that it's impossible to tell for sure whether he gets the irony. Then again, Norvig is the guy who authored a hilarious PowerPoint translation of Lincoln's Gettysburg Address ( available at http://www.norvig.com ), a geek classic. So maybe he's having fun.
But he's also making a point. When someone enters a query on Google for "spiritual enlightenment," it's not clear what he's seeking. The concept of spiritual enlightenment means something different from what the two words mean individually. Google has to navigate varying levels of literality to guess at what the user really wants.
This is where Googlers live, amid semantic, visual, and technical esoterica. Norvig is Google's director of search quality, charged with continuously improving people's search results. Google tracks the outcome of a huge sample of the queries that we throw at it. What percentage of users click on the first result that Google delivers? How many users click on something from the first page? Norvig's team members scour the data, looking for trouble spots. Then they tweak the engine.
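The kind of aggregate click statistics Norvig's team scours might be computed like this. A minimal sketch: the log format, function name, and sample numbers are hypothetical, not Google's internal tooling:

```python
def click_metrics(clicked_ranks, page_size=10):
    """clicked_ranks: for each logged query, the rank of the result the user
    clicked, or None if the user clicked nothing."""
    total = len(clicked_ranks)
    first = sum(1 for r in clicked_ranks if r == 1)
    first_page = sum(1 for r in clicked_ranks if r is not None and r <= page_size)
    return {
        "clicked_first_result": first / total,
        "clicked_first_page": first_page / total,
    }

# Sample log across five queries (None = no click; 12 = clicked on page two).
sample = [1, 3, 1, None, 12]
m = click_metrics(sample)
```

Dips in numbers like these, broken out by language or query type, are the "trouble spots" that tell the team where to tweak.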
The cardinal rule at Google is, If you can do something that will improve the user's experience, do it. It is a mandate in part born of paranoia: There's always a chance that the Google destroyer is being pieced together by two more guys in a garage. By some estimates, Google accounts for three-quarters of all Web searches. But because it's not perfect, being dominant isn't good enough. And the maniacal attack on imperfection reflects a genuine belief in the primacy of the customer.
That's why Google must correctly interpret searches by Turks and Finns, whose queries resemble complete sentences, and in Japanese, where words run together without spaces. It has to understand not only the meanings of individual words but also the relationships of those words to other words and the characteristics of those words as objects on a Web page. ( A page that displays a search word in boldface or in the upper-right-hand corner, for example, will likely rank higher than a page with the same words displayed less prominently. )
It's why the difference between 0.3 seconds and 0.2 seconds is pretty profound. Most searches on Google actually take less than 0.2 seconds. That extra tenth of a second is all about the outliers: queries crammed with unrelated words or with words that are close in meaning. The outliers can take half a second to resolve -- and Google believes that users' productivity begins to wane after 0.2 seconds. So its engineers find ways to store ever-more-arcane Web-text snippets on its servers, saving the engine the time it takes to seek out phrases when a query is made.
And it's why, most of the time, the Google home page contains exactly 37 words. "We count bytes," says Google Fellow Urs Holzle, who is on leave from the University of California at Santa Barbara. "We count them because our users have modems, so it costs them to download our pages."
Just as important, every new word, button, or feature amounts to an assault on the user's attention. "We still have only one product," Holzle says. "That's search. People come to Google to search the Web, and the main purpose of the page is to make sure that you're not distracted from that search. We don't show people things that they aren't interested in, because in the long term, that will kill your business."
Google doesn't market itself in the traditional sense. Instead, it observes, and it listens. It obsesses over search-traffic figures, and it reads its email. In fact, 10 full-time employees do nothing but read emails from users, distributing them to the appropriate colleagues or responding to them themselves. "Nearly everyone has access to user feedback," says Monika Henzinger, Google's director of research. "We all know what the problem areas are, where users are complaining."
The upshot is that Google enjoys a unique understanding of its users -- and a unique loyalty. It has managed a remarkable feat: appealing to tech-savvy Web addicts without alienating neophytes who type in "amazon.com" to find . . . Amazon.com. ( Yes, people really do that. Google doesn't know why.)
"Google knows how to make geeks feel good about being geeks," says Cory Doctorow, prominent geek, blogger, and technology propagandist. Google has done that from the beginning, when Brin and Page basically laid open their stunning new technology in a 1998 conference paper. They invited the geeks in and made them feel as if they were in on something special.
But they didn't forget to make everyone else feel special too. They still do, by focusing relentlessly on the quality of the experience. Make it easy. Make it fast. Make it work. And attack everything that gets in the way of perfection.
Rule Number Two: The World Is Your R&D Lab
Paul Bausch is a 29-year-old Web developer in Corvallis, Oregon. He works with ASP, SQL Server, Visual Basic, XML, and a host of other geek-only technologies. He helped create Blogger, a widely used program that helps people set up their own Web log. And in a way that's intentionally imprecise, he's part of Google's research effort.
"Isn't this great?" exclaims Nelson Minar, a senior Google engineer. Minar and I are fooling with Bausch's quirky creation called Google Smackdown, where you can compare the volume of Google citations for any two competing queries. ( The New York Yankees slam the New York Mets; war conquers peace. ) Google loosed Smackdown and other eccentric Web novelties when it released a developer's kit last spring that lets anyone integrate Google's search engine into their own application. The download is simple, and the license is free for the taking.
Here's the scary bit: Basically, those developers can do whatever they want. The only control that Google exerts is a cap of 1,000 queries per day per license to guard against an onslaught that might bring down its servers. In most cases, Minar and his colleagues have no idea how people use the code. "It's kind of frustrating," he concedes. "We would love to see what they're doing."
Most companies would sooner let temps into the executive washroom than let customers -- much less customers who can hack -- anywhere near their core intellectual property. Google, though, grasps the power of an engaged community. The developer's kit is a classic Trojan-horse strategy, putting Google's engine in places that the company might not have imagined. More important, Bausch says, opening up the technology kimono "turns the world into Google's development team."
Sites like Smackdown, while basically toys, "are an inkling of what Google could be used for," Minar says. "We can't predict what will happen. But we can predict that there will be an effect on our technology and on the way the world views us." And more likely than not, it will be something pretty cool.
Rule Number Three: Failures Are Good. Good Failures Are Better.
In Google Labs, just two clicks away from its home page, anyone can test-drive Google Viewer, sort of a motion-picture version of your search results, or Voice Search, a tool that lets you phone in a query and then see your results online. Is either ready for prime time? Not really. ( Try them out. On Voice Search, you're as likely to get someone else's results as your own. )
But that's the point. The Labs reflect a shared ethos between Google and its users that allows for public experimentation -- and for failure. People understand that not everything Google puts on view will work perfectly. They also understand that they are part of the process: They are free to tell Google what's great, what's not, and what might work better.
"Unlike most other companies," observes Matthew Berk, a senior analyst at Jupiter Research, "Google has said, 'We're going to try things, and some aren't going to work. That's okay. If it doesn't work, we'll move on.' "
In the search business, failure is inevitable. It comes with the territory. A Web search, even Google's, doesn't always give you exactly what you want. It is imperfect, and that imperfection both allows and requires failure. Failure is good.
But good failures are even better. Good failures have two defining characteristics. First, says Urs Holzle, "you know why you failed, and you have something you can apply to the next project." When Google experimented with thumbnail pictures of actual Web pages next to results, it saw the effect that graphical images had on download times. That's one reason why there are so few images anywhere on Google, even in ads.
But good failures also are fast. "Fail," Holzle says. "But fail early." Fail before you invest more than you have to or before you needlessly compromise your brand with a shoddy product.
Rule Number Four: Great People Can Manage Themselves
Google spends more time on hiring than on anything else. It knows this because, like any bunch of obsessive engineers, it keeps track. It says that it gets 1,500 resumes a day from wanna-be Googlers. Between screening, interviewing, and assessing, it invested 87 Google people-hours in each of the 300 or so people that it hired in 2002.
Google hires two sorts of engineers, both aimed at encouraging the art of fast failure. First, it looks for young risk takers. "We look for smart," says Wayne Rosing, who heads Google's engineering ranks. "Smart as in, do they do something weird outside of work, something off the beaten path? That translates into people who have no fear of trying difficult projects and going outside the bounds of what they know."
But Google also hires stars, PhDs from top computer-science programs and research labs. "It has continually managed to hire 90% of the best search-engine people in the world," says Brian Davison, a Lehigh University assistant professor and a top search expert himself. The PhDs are Google's id. They are the people who know enough to shoot holes in ideas before they go too far -- to make the failures happen faster.
The challenge is negotiating the tension between risk and caution. When Rosing started at Google in 2001, "we had management in engineering. And the structure was tending to tell people, No, you can't do that." So Google got rid of the managers. Now most engineers work in teams of three, with project leadership rotating among team members. If something isn't right, even if it's in a product that has already gone public, teams fix it without asking anyone.
"For a while," Rosing says, "I had 160 direct reports. No managers. It worked because the teams knew what they had to do. That set a cultural bit in people's heads: You are the boss. Don't wait to take the hill. Don't wait to be managed."
And if you fail, fine. On to the next idea. "There's faith here in the ability of smart, well-motivated people to do the right thing," Rosing says. "Anything that gets in the way of that is evil."
Rule Number Five: If Users Come, So Will the Money
Google has no strategic-planning department. CEO Eric Schmidt hasn't decreed which technologies his engineers should dabble in or which products they must deliver. Innovation at Google is as democratic as the search technology itself. The more popular an idea, the more traction it wins, and the better its chances.
Here's how one Google service came into the world. In December 2001, researcher Krishna Bharat posted an internal email inviting Googlers to check out his first crack at a dynamic news service. Although Google offered a basic headline service at the time, news was not a corporate mandate. This was simply Bharat's idea. A respected PhD hired away from Compaq and a member of the company's 10-person research lab, Bharat is paid, basically, to come up with new ideas.
For an early prototype, it was quite a piece of work. Bharat had built an engine that crawled 20 news sources once an hour, automatically delivering the most recent stories on in-demand topics -- something like a virtual wire editor. And within Google, it got a lot of attention. Importantly, it attracted the attention of Marissa Mayer, a young engineer turned project manager.
Mayer connected Bharat with an engineering team. And within a month and a half, Google had posted on its public site a beefed-up version of the text-based demo, which is now called Google News and which features 155 sources and a search function. Within three weeks of going public, the service was getting 70,000 users a day.
One reason Google puts its innovations on public display is to identify failures quickly. Another reason is to find winners. For Bharat and Mayer, those 70,000 users provided ammunition to build a case for News within Google. "A public trial helps you go fast," Mayer says. "If it works, it builds internal passion and fervor. It gets people thinking about the problem."
Soon, Mayer had marshaled a handful of engineers to bulk up News. They expanded its reach to more than 4,000 sources, updated continuously instead of hourly. They created an engine that was robust enough to support five times the anticipated early volume. And they prettied it up, designing an interface that displayed hundreds of headlines and photos but that was still easy to navigate. By September, the new News was up.
Is Google News an actual product? Not exactly. Its home page is still labeled Beta, as are all but a few of Google's offerings. It may become a Google fixture, it may disappear, or it may recede into Google Labs. Mayer is still studying the traffic, and the engineers are still tweaking, reacting to users' emails.
The company's organic approach to invention bugs some onlookers. "Google is a great innovator," says Danny Sullivan, editor of Search Engine Watch and an influential commentator. "They keep rolling out great things. But Google News was an engineer deciding he wanted a news engine. Now Google has this product, and it doesn't know how to make money off of it."
Sullivan is onto something important: At some point, all of this great stuff has to turn a profit. That was the one great moral of the dotcom blowout: "Monetizing eyeballs" turned out to mean "throwing money down a sinkhole." When Mayer argues that "the traffic will let us know" whether News is a success, she's echoing a long line of now-unemployed executives who thought that they had tamed the business cycle.
But at Google, building and then following the traffic makes perfect sense. It's central to the company's culture and its operating logic. Consider this: For the first 18 months of its existence, Google didn't make a penny from its basic Web-search service. Only then did it make the transition from great technology to great technology with a critical mass of users.
And Google was able to package that traffic in ways that seem both ingenious and completely synchronous. The search service itself remained free. But Google has, for example, sold untold numbers of ads pegged to specific search keywords. ( Not surprisingly, Fast Company slips in a paid ad to the side of your results whenever your query includes fast company. )
Advertisers don't just pay a set rate, or even a cost per thousand viewers. They bid on the search term. The more an advertiser is willing to pay, the higher its ad will be positioned. But if the ad doesn't get clicks, its rank will decline over time, regardless of how much has been bid. If an ad is persistently irrelevant, Google will remove it: It's not working for the advertiser, it's not serving users, and it's taking up server capacity.
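A rough model of that ranking rule: position driven by bid times clickthrough rate, with persistently irrelevant ads removed. The function, threshold, and figures here are my own illustrative assumptions; the article doesn't disclose Google's actual formula:

```python
def rank_ads(ads, min_ctr=0.001):
    """ads: list of (name, bid_dollars, clickthrough_rate) tuples.
    Drops ads nobody clicks, then orders the rest by bid * clicks."""
    kept = [a for a in ads if a[2] >= min_ctr]           # prune irrelevant ads
    return sorted(kept, key=lambda a: a[1] * a[2], reverse=True)

ads = [
    ("high bid, never clicked", 5.00, 0.0005),   # pruned despite the big bid
    ("modest bid, popular",     1.00, 0.0400),   # effective value 0.040
    ("big bid, so-so clicks",   3.00, 0.0100),   # effective value 0.030
]
order = [name for name, _, _ in rank_ads(ads)]
```

The point of the model is that money alone can't buy the top slot: an ad users ignore sinks no matter what the advertiser bid.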
This is how it is at Google. Google News attracted eyeballs among Bharat's fellow Googlers, so it made the leap to the public domain. If enough users like it, it will have real power with advertisers. And traffic for advertisers will beget even more traffic for advertisers.
So yes, Mayer has a revenue strategy. She's had one since January 2002, before the first version of News went public. She won't say what it is, but if News can build enough traffic, Google almost surely will seek advertising. It will probably resell the service to portals and other commercial sites, just as it does with its core Web search. ( Every time you see the Google logo on a corporate site, the company is likely paying at least $25,000 a year for a Google server. ) "But we're not in a hurry," Mayer says. "We're focused on making News a great experience. Until we figure out whether the product has traction, there's no rush to execute the revenue plan."
Could it be any simpler? Build great products, and see if people use them. If they do, then you have created value. And if you've truly done that, then you have a business. Says Mayer: "Our motto here is, There's no such thing as success-failure on the Net." In other words, if users win, then Google wins. Long live democracy.
Sidebar: Just how big is Google?
That's hard to say. Officially, Google says that it processes more than 150 million searches a day, but the true number is probably much higher. According to Nielsen/NetRatings, 67.6 million people worldwide visited Google an average of 6.2 times last December. Analysts guess that last year's revenue was between $60 million and $300 million.
Sidebar: A Gaggle of Google Games
While tens of millions of people like Google, a disconcertingly large minority are obsessed with it. Since 1999, techies have invested many hours and much creativity into devising a wide range of Google-based parlor games and curiosities. Here's a sampling, courtesy of Google and Cameron Marlow at MIT's Media Lab.
Googlewhack Find two words which, when combined in a Google query, deliver one and only one result. http://www.googlewhack.com claims that it has recorded 120,000 whacks since January 2002. Among recent entries to its "Whack Stack" are prevarication pileups and hiccupping flubber. ( A Fast Company original: defamatory meerkats. )
Googlebomb Geek terrorism. Taking advantage of a Google loophole, Googlebombers gang up to mass-hyperlink a target page with a specific ( usually derogatory ) phrase. Google picks up on the links, even if the phrase isn't on the page itself. The legendary first, incited by Adam Mathes in April 2001, tagged Mathes's friend Andy Pressman's site with the words "talentless hack." For a while, it stuck.
Googleshare The invention of blogger Steven Berlin Johnson. Search Google for one word. Then search those results for the name of a person. Divide the number of results delivered for your second search by those from the first to get that person's "semantic mindshare" of the word.
Googlism Type in your name, someone else's name, or a date, place, or thing at http://www.googlism.com. The application, written by a team at Domain Active in Australia, uses Google to deliver Web-based definitions of your phrase. Bill Gates, for example, is "the anti-Christ," "a thief," "a hero," and "a wanker."
Google Smackdown Two queries. One search engine. A "terabyte tug-of-war," as its creator, Paul Bausch, calls it. Just plug in two competing words or phrases at http://www.onfocus.com/googlesmack/down.asp, and see which delivers more Google results. ( Google, with 17.5 million, suffers a rare embarrassment at the hands of God, with 42.6 million. )
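The Googleshare game above is simple division. A sketch, with made-up result counts for illustration:

```python
def googleshare(term_results, term_plus_name_results):
    """Fraction of a word's Google results that also mention a given person:
    the person's 'semantic mindshare' of the word."""
    return term_plus_name_results / term_results

# Hypothetical counts: if a word returns 1,000,000 results, and restricting
# those results to one person's name returns 25,000, that person owns
# 2.5% of the word.
share = googleshare(1_000_000, 25_000)
```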
Sidebar: How does Google keep innovating?
One big factor is the company's willingness to fail. Google engineers are free to experiment with new features and new services and free to do so in public. The company frequently posts early versions of new features on the site and waits for its users to react. "We can't predict exactly what will happen," says senior engineer Nelson Minar.
Keith H. Hammonds ( khammonds@fastcompany.com ) is a Fast Company senior editor based in New York.
posted by James
2:05 PM
Booms Cause Busts...duh...
Porter Stansberry, noted economic guru, writes this after reading a new book that "discovers" that economic booms always cause busts. A further look at the historical numbers, however, shows that the 1980s-90s boom - with its infrastructure buildout - was not financed from profits, as all the others since 1930 were, but from borrowings. Hmmm...
" Today the average debt to equity ratio of the Dow Jones Industrial Average and the Dow Jones Transportation Index is three times. And in many "blue-chip" stocks, the numbers are much worse. Ford, for example, carries debt equal to 29 times equity on its balance sheet. "
Debt Service is Job One now....
Go read the complete (one page) essay :
Profitless Expansion
Excerpt -
================
These cellular phone companies suffer terrible economics. They cannot rationalize their prices to match their costs because of customer churn. Nor can they resist system upgrades because of rapid system degradation. Finally, not only must they operate capital-intensive networks, they must do so under a highly regulated telecom regime, based on government extortion for spectrum. The same problems have plagued the airline industry since its inception. In fact, a study cited by Warren Buffett shows that as an industry, cumulatively, airlines have never turned a profit.
The problems of the cellular phone companies, to some degree, are shared by all of corporate America. Massive investments - in excess of profits - over the last 22 years have robbed almost every industry of its pricing power. And the current problem of low returns on assets finds a twin in over-indebtedness: the 22-year boom we experienced in fixed investment has caused them both.
- We cannot back out of having a war with Iraq. If tomorrow Saddam says, "Hey! You win! I'm outta here!" what would happen to the "war premium" that is in bonds?
I mean, if the world suddenly went to some happy mode, then what chump in his right mind would loan money to a profligate government at less than the rate of inflation? And that means that bond prices would drop like stones in the market-place. And interest rates would have increases measured in hundreds of percents. Imagine that tomorrow interest rates on short-term money market money went from today's roughly one percent, sometimes even less than one percent, all the way back up to, oh, say, six or seven percent! That would be a 700% increase in rates! Now close your eyes and envisage the newspaper headlines. "700% inflation in interest rates! Economists in shock! President plotzes! Full story, and college pigskin preview, page eight."
Now imagine that short rates go from roughly one percent to twelve percent! How about twenty? How about thirty percent A MONTH? Sounds funny and laughable, huh? Now that you are happily chuckling, and by the way you look so cute when your nose crinkles up like that and your laugh is like music to my ears, come up with the reasons why it will NOT happen! Quick! Come on! Let's go, Mister Chortle Guy! Huh? How will it not happen? Let's hear it!
And an ancillary lesson of history is that people who hold gold will not suffer overmuch. Sometimes they prosper, but they never suffer for very long. Have you ever heard of one instance, in all of history, when gold went to zero purchasing power? Is there one single apocryphal anecdote that somebody ever went broke holding gold? Midas went broke because he had all that worthless gold? They killed and ate the goose that laid the golden eggs, since they needed meat more than gold, and you could not buy meat with gold, since gold was so valueless?
I bring this up because I want it to be very clear in your mind, with a crystal-like clarity, so clear that every syllable seems limned in sparkling sunlight, reminiscent of the way my eyes seem to sparkle whenever anybody asks if perhaps I would like to have some cookies, that I am saying that we are embarking on a period of time that will seem unprecedented, and it will be very unpleasant in the extreme, both in numbers of people affected and the cost to each, and as we shade our eyes with our trembling hands and look out over the coming years we see that it will be for a long time. As measured in decades. And the one thing that you can count on, looking at the historical evidence, is that gold will protect you from whatever happens. Short of confiscation by the government. Which will probably be what happens. The bastards.
The folks at the Daily Reckoning website say, "We recommend that you buy gold, not because we know what will happen, but because we don't."
But since I am such an egotistical and conceited bundle of know-it-all snot, I will bravely go one giant step farther, and say that I personally DO know what will happen. That's right! I know precisely what will happen! And, so, I am recommending that you buy gold, and with both hands snatching it up like a greedy two-year old scrambling for dropped candy, because I DO know what will happen. And what will happen will be bad for every thing that is NOT gold! Or silver. Or commodities. It's just that gold is so handy, small and portable.
- Speaking of gold, Kelly Patricia O'Meara, of Insight Magazine, writes that "What the Bank of Portugal revealed in its 2001 annual report is that 433 tonnes [metric tons] of gold - some 70 percent of its gold reserve - either have been lent or swapped into the market. According to Bill Murphy, chairman of the Gold Anti-Trust Action Committee (GATA), a nonprofit organization that researches and studies the gold market and reports its findings at www.LeMetropoleCafe.com: 'This gold is gone - and it lends support to our years of research that the central banks do not have the 32,000 tonnes of gold in reserve that they claim.'"
"So why would banks do that?" you ask? Well, the deal was that the central banks had all this gold left over from the old days, see, and it costs money to store and guard all this gold, which was an expensive hassle and pain in the old wazoo, and so what they decided to do, in order to get a few bucks rolling in, you know, sorta getting some pin-money but at least "putting assets to work" which was a mantra that was all the rage at the time, was to loan the gold to gold-mining guys who would sell the borrowed gold to customers instead of having to go to the trouble and expense of actually digging the stuff out of the ground and getting their hands all dirty.
And everybody prospered, especially since the artificial explosion in supply kept driving the price down and down, year after year, making the whole thing more and more profitable the whole time.
Murphy goes on to explain: "The essence of the rigging of the gold market is that the bullion banks borrowed central-bank gold from various vaults and flooded the market with supply, keeping the price down. The GATA camp has uncovered information that shows that around 15,000 to 16,000 tonnes of gold have left the central banks, leaving the central-bank reserves with about half of what is officially reported."
"So what?" you ask? Well, now that the price of gold is rising, the huge, gigantic short position is becoming unprofitable for the short position. How big is the short position? Well, if you accept the notion that the 2,000 tonne per year total global output of new gold would now be used for closing out the short position, then it would require 100% of all the gold mined for the next seven years. Seven years!
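That years-to-cover arithmetic can be checked directly. A sketch using the tonnage figures quoted above; note that 15,000 to 16,000 tonnes against 2,000 tonnes a year actually comes to 7.5 to 8 years, which the text rounds to seven:

```python
def years_to_cover(short_tonnes, annual_output_tonnes):
    """Years of total world mine output needed to close out a short position."""
    return short_tonnes / annual_output_tonnes

low = years_to_cover(15_000, 2_000)   # 7.5 years
high = years_to_cover(16_000, 2_000)  # 8.0 years
```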
As a reference, the total short position on the stocks of the NYSE, measured as a percentage of average daily volume, is measured in days, and is 5.4 days at latest calculation, but NOT weeks, and certainly NOT months, and emphatically NOT years, and, with voice rising in volume and actually shaking with emotion, NOT SEVEN, I'M TALKING SEVEN FREAKING YEARS!
.....
Now we open the file labeled, "How to make a few bucks on this tasty tidbit of news... "John Embry, the manager of last year's best-performing North American gold fund and manager of the Royal Precious Metals Fund for the Royal Bank of Canada, says he is putting his and his clients' money on the 'lunatic fringe' in this dispute." And if you care to examine my credentials, you'll notice that I am, indeed proudly, a card-carrying member of the aforementioned lunatic fringe. So if he were managing any of my money, and he is not, as far as I know, he would be doing the same thing that I am doing and I recommend that you do, too, and that is putting my money on gold.
- Doug Noland writes "Since the beginning of 1998 the ratio of Total Credit Market Debt to GDP has increased from 256% to 303%." Now this is somewhere close to the maximum, historically. Every chart you look at, even the ones that go back to before humans even evolved, and our primitive forebears were still living in trees, don't show debt ratios going higher. And those were stupid monkeys, for crying out loud! If anybody can be expected to be profligate with credit, you would think it would be some stinking primordial primates living in the jungle! So, judging from the historical record, don't go looking for an explosion of new debt formation by modern human beings, the money then used to buy things, and thus to propel the economy higher anytime soon. Maybe if those damned monkeys lined up some credit, okay, but not with modern homo sapiens.
Debt, it is said, can be dealt with in only one of two ways: either you pay it or you repudiate it. And since the levels of debt are now so high that they cannot reasonably be expected to be paid off, that leaves repudiation.
- There are still people shaking their heads at how mild the recession was, as if this were something unexplained and oddly optimistic. It is not. The mildness of the recession is easily explained by noting that government hiring and government spending never really dropped. I maintain that the collective government system IS the economy, and as it goes, so goes everything else.
- On the Daily Reckoning site today we learn that without fictional pension-plan profits, earnings of the S&P 500 would have been $68.7 billion in 2001, rather than the $219 billion they said they made. That means real, take-it-to-the-bank earnings were about a third of what was reported.
And since nothing much has changed since 2001, except maybe to get a lot worse, let's see what a revised P/E on the S&P 500 would be with these new, revised earnings. Right now, the index is at about 810. Taking a third of the reported $30.34 in earnings gives about ten bucks of earnings per share. 810 divided by 10 gives a new P/E of, yikes, 81!
And the bad news is that this P/E of 81 uses the optimistic assumption that things did not get worse between 2001 and now. Which they did. In spades. Ugh.
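The revised-P/E arithmetic above can be checked in a few lines. The inputs (index level 810, reported S&P 500 earnings of $30.34 per share, and the $68.7 billion real versus $219 billion reported 2001 earnings) are all taken from the text; note that using the exact ratio instead of the rounded "one third" pushes the multiple a bit higher still:

```python
# Reproducing the revised-P/E arithmetic from the passage above.
index_level = 810.0
reported_eps = 30.34                  # reported S&P 500 earnings per share
real_fraction = 68.7 / 219.0          # real vs. reported 2001 earnings

revised_eps = reported_eps * real_fraction
revised_pe = index_level / revised_eps
print(round(revised_eps, 2))  # about $9.52 of "real" earnings
print(round(revised_pe))      # a P/E of about 85
```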
--- Mogambo Sez: There will be a new bull market starting soon, except that this time it will not be in common stocks. It will be in commodities. And in stocks of Chinese companies. Fortunes will be made, more and more, year after year, until busboys in restaurants will be gloating about how much money they made in soybean futures and shares of the Beijing Foundry and Fish Company, and the whole bull-market euphoria and irrational exuberance thing will be in full flower again. Relax.
Moore's law of ever-increasing transistor counts is giving way to power and size considerations as everything becomes wireless. A concise explanation of chip manufacturing appears as part of an analysis in the Digital Power Report, written by Peter Huber (by subscription): DPR http://www.powercosm.com/
===========================
From the Friday Gilder Letter -
THE WEEK/Morgan's Law
~~~~~~~~~~~~
Jim Morgan's law: The number of transistors on a wafer doubles even faster than you think. Gordon Moore's 18-month doubling rate refers to transistor density-the number of transistors per square inch of silicon wafer. But wafers grow too: 50 millimeters (mm) in diameter in 1970, 200 mm a couple of years ago. Intel's (INTC) first 300 mm fab went on line in early '02, and 450 mm wafers are now on the horizon. To understand both the meteoric growth of information technology and the boom-bust character of that growth, look first to Morgan's law, not Moore's.
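The wafer-diameter figures above translate directly into area, and thus (at fixed density) into transistors per wafer. A minimal sketch, assuming area simply scales with the square of the diameter and ignoring real-world edge exclusion:

```python
import math

# "Morgan's law" sketch: transistors per wafer grow with Moore's-law
# density AND with wafer area, which scales as the diameter squared.
# Diameters (in mm) are the ones cited in the text.

def wafer_area_mm2(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2

for d in (50, 200, 300, 450):
    print(d, "mm ->", round(wafer_area_mm2(d) / wafer_area_mm2(50), 1), "x 1970 area")

# Moving from 200 mm to 300 mm wafers alone multiplies the usable area
# (and, at fixed density, the transistors per wafer) by (300/200)**2 = 2.25.
```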
The world builds logic structures on almost 5-billion square inches-about 800 acres, or just over one square mile-of silicon every year. The silicon itself emerges from foundries that purify and crystallize semiconductor materials; these foundries are the 21st century's steel mills. Machine tools then transform the raw crystals into functional structures-a square inch of semiconductor becomes about $25 (on average) of valuable product-hence the $120-billion global market for integrated circuits.
No serious student of our modern economy can doubt that the 5-billion square inches will become 10, and then 20, and then 100, or that the $120 billion will grow to $250 billion, and then to $1 trillion. For most of human history, humanity fought for land because land supplied crops and trees, and thus bulk material and energy, which are the essential starting points in the pursuit of everything else. The Industrial Revolution changed that picture only in that the struggle came to center more on resources buried beneath the land, particularly coal and oil. That era is now fading into history. The essential real estate is measured today by the square inch, not by the square mile. The square mile of semiconductors the world produces every year supplies more wealth, more power, and more global dominion than entire continents' worth of arable land.
The value extracted from the silicon real estate doesn't come from peasants who till the soil, nor even from workers who labor tirelessly in the din of the factory floor. It comes from machines. The first thing you notice when you walk into a chip fab is that you don't walk into a chip fab. Almost no one does. What you walk up to, if you even get that close, is a hermetic viewing window. On the other side, you may or may not see a couple of people in bunny suits. Mainly, however, what you see is tens of millions of dollars of immaculate equipment. This is what Henry Ford's assembly line has come to, and what the foolish Karl Marx most feared: a factory in which capital has almost wholly displaced labor. The machines do what the hand weavers did before the automated loom-for all practical purposes, they do everything. The handful of workers in the chip fab don't manufacture chips; they mind the chip-manufacturing machines.
There are about 900 chip fabs worldwide. Each one costs $1 to $2 billion to build and equip. The machines and tools-not the land, labor, buildings, or raw materials-account for the lion's share of that cost. Some of the machines provide extremely pure water, air, and chemicals. Others provide extremely stable electricity. Still others assemble, package, inspect, and test the chips-this group comprises the second largest share of overall fab capital spending, accounting for about $7 billion a year, globally. The rest-which account for over two-thirds of the semi equipment business, or about $17 billion in annual spending-perform one of three fundamental functions: They deposit material on a surface (grow, dope, implant, and heat-treat); they paint blueprints on the surface so-formed (photolithography); or they remove material from the surface as dictated by the blueprint (etching, polishing).
" To find out which company builds the high-precision tools that shape,
form, and join semiconductors in the manufacture of logic chips,
powerchips, and electron-to-photon conversion devices, subscribe to the
Digital Power Report Now at http://www.digitalpowerreport.com...."
Internet architect David Reed explains how bad science created the
broadcast industry.
- - - - - - - - - - - -
By David Weinberger
March 12, 2003
There's a reason our television sets so outgun us,
spraying us with trillions of bits while we respond only with the
laughable trickles from our remotes. To enable signals to get through
intact, the government has to divide the spectrum of frequencies into
bands, which it then licenses to particular broadcasters. NBC has a
license and you don't.
Thus, NBC gets to bathe you in "Friends," followed by a very
special "Scrubs," and you get to sit passively on your couch. It's an
asymmetric bargain that dominates our cultural, economic and
political lives -- only the rich and famous can deliver their
messages -- and it's all based on the fact that radio waves in their
untamed habitat interfere with one another.
Except they don't.
"Interference is a metaphor that paints an old limitation of
technology as a fact of nature." So says David P. Reed, electrical
engineer, computer scientist, and one of the architects of the
Internet. If he's right, then spectrum isn't a resource to be divvied
up like gold or parceled out like land. It's not even a set of pipes
with their capacity limited by how wide they are or an aerial highway
with white lines to maintain order.
Spectrum is more like the colors of the rainbow, including the ones
our eyes can't discern. Says Reed: "There's no scarcity of spectrum
any more than there's a scarcity of the color green. We could
instantly hook up to the Internet everyone who can pick up a radio
signal, and they could pump through as many bits as they could ever
want. We'd go from an economy of digital scarcity to an economy of
digital abundance."
So throw out the rulebook on what should be regulated and what
shouldn't. Rethink completely the role of the Federal Communications
Commission in deciding who gets allocated what. If Reed is right,
nearly a century of government policy on how to best administer the
airwaves needs to be reconfigured, from the bottom up.
- - - - - - - - - - - -
Spectrum as color seems like an ungainly metaphor on which to hang a
sweeping policy change with such important social and economic
implications. But Reed will tell you it's not a metaphor at all.
Spectrum is color. It's the literal, honest-to-Feynman truth.
David Reed is many things, but crackpot is not one of them. He was a
professor of computer science at MIT, then chief scientist at
Software Arts during its VisiCalc days, and then the chief scientist
at Lotus during its 1-2-3 days. But he is probably best known as a
coauthor of the paper that got the Internet's architecture
right: "End-to-End Arguments in System Design."
Or you may recognize him as the author of what's come to be known as
Reed's Law -- which says the true value of a network isn't determined
by the number of individual nodes it connects (Metcalfe's Law) but by
the far higher number of groups it enables. But I have to confess
that I'm biased when it comes to David Reed. I first encountered him
in person three years ago at a tiny conference when he deftly pulled
me out of a hole I was digging for myself in front of an audience of
my betters. Since then, I've watched him be bottomlessly
knowledgeable on a technical mailing list and patiently helpful as a
source for various articles I've worked on.
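The contrast between Metcalfe's Law and Reed's Law drawn above can be made concrete with a quick computation. A common formalization (assumed here, not spelled out in the article) counts n(n-1)/2 pairwise links for Metcalfe and 2^n - n - 1 possible groups of two or more members for Reed:

```python
# Numeric sketch of Metcalfe's Law vs. Reed's Law for a network of n members.

def metcalfe(n):
    return n * (n - 1) // 2        # distinct pairwise connections

def reed(n):
    return 2 ** n - n - 1          # subsets with two or more members

for n in (10, 20, 30):
    print(n, metcalfe(n), reed(n))

# Even at n = 30, the group-forming count (over a billion) dwarfs
# the 435 pairwise links -- Reed's point about where value lives.
```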
It doesn't take much to get Reed to hold forth on his strong, well-
articulated political and social beliefs. But when it comes to
spectrum, he speaks most passionately as a scientist. "Photons,
whether they are light photons, radio photons, or gamma-ray photons,
simply do not interfere with one another," he explains. "They pass
through one another."
Reed uses the example of a pinhole camera, or camera obscura: If a
room is sealed against light except for one pinhole, an image of the
outside will be projected against the opposite wall. "If photons
interfered with one another as they squeezed through that tiny hole,
we wouldn't get a clear image on that back wall," Reed says.
If you whine that it's completely counterintuitive that a wave could
squeeze through a pinhole and "reorganize" itself on the other side,
Reed nods happily and then piles on: "If photons can pass through one
another, then they aren't actually occupying space at all, since the
definition of 'occupying' is 'displacing.' So, yes, it's
counterintuitive. It's quantum mechanics."
Surprisingly, the spectrum-as-color metaphor turns out to be not
nearly as confounding to what's left of common sense. "Radio and
light are the same thing and follow the same laws," Reed
says. "They're distinguished by what we call frequency." Frequency,
he explains, is really just the energy level of the photons. The
human eye detects different frequencies as different colors. So, in
licensing frequencies to broadcasters, we are literally regulating
colors. Crayola may own the names of the colors it's invented, and
Pantone may own the standard numbers by which digital designers refer
to colors, but only the FCC can give you an exclusive license to a
color itself.
Reed prefers to talk about "RF [radio frequency] color," because the
usual alternative is to think of spectrum as some large swatch of
property. If it's property, it is easily imagined as finite and
something that can be owned. If spectrum is color, it's a lot harder
to think of in that way. Reed would recast the statement "WABC-AM has
an exclusive license to broadcast at 770 kHz in NYC" to "The
government has granted WABC-AM an exclusive license to the color
Forest Green in NYC." Only then, according to Reed, does the current
licensing policy sound as absurd as it is.
But if photons don't interfere, why do our radios and cellphones go
all crackly? Why do we sometimes pick up two stations at once and not
hear either well enough?
The problem isn't with the radio waves. It's with the
receivers: "Interference cannot be defined as a meaningful concept
until a receiver tries to separate the signal. It's the processing
that gets confused, and the confusion is highly specific to the
particular detector," Reed says. Interference isn't a fact of nature.
It's an artifact of particular technologies. This should be obvious
to anyone who has upgraded a radio receiver and discovered that the
interference has gone away: The signal hasn't changed, so it has to
be the processing of the signal that's improved. The interference was
in the eye of the beholder all along. Or, as Reed says, "Interference
is what we call the information that a particular receiver is unable
to separate."
But, Reed says, "I can't sign on to 'It's the receiver, stupid.'" We
have stupid radios not because we haven't figured out how to make
them smart but because there's been little reason to make them smart.
They're designed to expect signal to be whatever comes in on a
particular frequency, and noise to be everything on other
frequencies. "The problem is more complex than just making smart
radios, because some of the techniques for un-confusing the receiver
are best implemented at the transmitter, or in a network of
cooperating transmitters and receivers. It's not simply the radios.
It's the systems architecture, stupid!"
One of the simplest examples of an architecture that works was
invented during World War II. We were worried that the Germans might
jam the signals our submarines used to control their radio-controlled
torpedoes. This inspired the first "frequency-hopping" technology:
The transmitter and receiver were made to switch, in sync, very
rapidly among a scheduled, random set of frequencies. Even if some of
those frequencies were in use by other radios or jammers, error
detection and retransmission would ensure a complete, correct
message. The U.S. Navy has used a version of frequency-hopping as the
basis of its communications since 1958. So we know that systems that
enable transmitters and receivers to negotiate do work -- and work
very well.
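The frequency-hopping scheme described above depends on one shared piece of state: transmitter and receiver must generate the same hop schedule. A toy sketch of that idea, with the channel count and seed invented purely for illustration:

```python
import random

# Toy frequency-hopping sketch: both ends derive the same pseudo-random
# channel schedule from a shared seed, so they hop in sync. The channel
# count and seed here are arbitrary, for illustration only.

CHANNELS = 79

def hop_schedule(seed, hops):
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

tx = hop_schedule(seed=42, hops=8)
rx = hop_schedule(seed=42, hops=8)
assert tx == rx    # same seed -> same schedule -> the two stay in sync
print(tx)
```

A jammer that doesn't know the seed can block only a few channels at a time; error detection and retransmission, as the article notes, cover the hops that do collide.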
So what architecture would Reed implement if he were king of the
world or, even less likely, chairman of the FCC?
Here Reed is dogmatically undogmatic: "Attempting to decide what is
the best architecture before using it always fails. Always." This is
in fact a one-line recapitulation of the end-to-end argument he and
his coauthors put forward in 1981. If you want to maximize the
utility of a network, their paper maintained, you should move as many
services as feasible out of the network itself. While that may not be
as counterintuitive as the notion of photons not occupying space, it
is at least non-obvious, for our usual temptation is to improve a
network by adding services to it.
That's what the telephone companies do: They add Caller I.D., and now
their network is more valuable. We know it's more valuable because
they charge us more for it. But the end-to-end argument says that
adding services decreases the value of a communications network, for
it makes decisions ahead of time about what people might want to do
with the network. Instead, Reed and his colleagues argued, keep the
network unoptimized for specific services so that it's optimized for
enabling innovation by the network's users (the "ends").
That deep architectural principle is at the core of the Internet's
value: Anyone with a good idea can implement a service and offer it
over the network instead of having to propose it to the "owners" of
the network and waiting for them to implement it. If the phone
network were like the Internet, we wouldn't have had to wait 10 years
to get caller I.D.; it would have been put together in one morning,
implemented in the afternoon, and braced for competitive offerings by
dinnertime.
For Reed the question is, What is the minimum agreement required to
enable wireless communications to be sorted out? The less the system
builds into itself, the more innovation -- in ideas, services and
business models -- will arise on the edges.
There is active controversy, however, over exactly how much "hand
shaking" protocol must be built in by the manufacturer and required
by law. Reed believes that as more and more of radio's basic signal-
processing functions are defined in software, rather than etched into
hardware, radios will be able to adapt as conditions change, even
after they are in use. Reed sees a world of "polite" radios that will
negotiate new conversational protocols and ask for assistance from
their radio peers.
Even with the FCC removed from the center of the system so that
the "ends" can dynamically negotiate the most efficient connections,
Reed sees a continuing role for government involvement: "The FCC
should have a role in specifying the relevant science and technology
research, through the NSF [National Science Foundation]. There may
even be a role for centralized regulation, but it's got to focus on
actual problems as they arise, not on theoretical fantasies based on
projections from current technology limits."
It's clear in speaking with Reed that he's frustrated. He sees an
economy that's ready to charge forward economically being held back
by policies based on the state of the art when the Titanic sank.
(That's literally the case: The government gave itself the right to
license the airwaves in 1912 in response to the Titanic's inability
to get a clear help signal out.) Key to the new generation, according
to Reed, are software-defined radios. An SDR is smart precisely where
current receivers are dumb. No matter how sophisticated and expensive
the receiver in your living room is, once it locks on to a signal it
knows how to do only one thing with the information it's receiving:
treat it as data about how to create subtle variations in air
pressure. An SDR, on the other hand, makes no such assumption. It is
a computer and can thus treat incoming data any way it's programmed
to. That includes being able to decode two broadcasts over a single
frequency, as demonstrated by Eric Blossom, an engineer on the GNU
Radio project.
Of course, an SDR doesn't have to treat information as encoded sounds
at all. For example, says Reed, "when a new Super-Frabjoulous Ultra-
Definition TV network broadcasts its first signal, the first bits it
will send would be a URL for a Web site that contains the software to
receive and decode the signals on each kind of TV in the market."
But SDR addresses only one component. Reed sees innovation all across
the spectrum, so to speak. He and his fellow technologist, Dewayne
Hendricks, have been arguing for what they call "very wide band," a
name designed to refer to a range of techniques of which "ultra-wide
band" (UWB) is the most familiar. Ultra-wide band packs an enormous
amount of information into very short bursts and transmits them
across a wide range of frequencies: lots of colors, lots of
information. Reed says: "The UWB currently proposed is a simple first
step. UWB transceivers are simple and could be quite low-cost. And
UWB can transmit an enormous amount of information in a very short
burst -- for example, a whole DVD could be sent to your car from a
drive-through, fast movie-takeout stand." Other very-wide-band
techniques, not yet as well developed as UWB, spread energy more
smoothly in time and, Reed believes, are more likely to be the basis
of highly scalable networks.
Given Reed's End-to-End commitment, it should be clear that he's not
interested in legislating against older technologies but in helping
the market of users sort out the technology they want. "Our goal
should be to enable a process that encourages the obsolescence of all
current systems as quickly as economically practicable. That means
that as fast as newer, better technology can be deployed to implement
legacy functions, those legacy functions should go away due to
competition." In other words, you'll be able to pick up NBC's "West
Wing" signal on your current TV until so many people have switched to
the new technology that broadcasters decide to abandon the current
broadcast techniques. "People didn't have to be legislated into
moving from the Apple II. They did it voluntarily because better
technology emerged," Reed says.
But ultimately Reed isn't in this because he wants us to have better
TVs or networked digital cameras. "Bad science is being used to make
the oligarchic concentration of communications seem like a fact of
the landscape." Opening the spectrum to all citizens would, according
to him, be an epochal step in replacing the "not" with an "and" in
Richard Stallman's famous phrase: "Free as in 'free speech,' not free
as in 'free beer.'" Says Reed: "We've gotten used to parceling out
bits and talking about 'bandwidth.' Opening the spectrum would change
all that."
But surely there must be some limit. "Actually, there isn't.
Information isn't like a physical thing that has to have an outer
limit even if we don't yet know what that limit is. Besides advances
in compression, there's some astounding research that suggests that
the informational capacity of systems can actually increase with the
number of users." Reed is referring to work by researchers in the
radio networking field, such as Tim Shepard and Greg Wornell of MIT,
David Tse of UC-Berkeley, Jerry Foschini of Bell Labs, and many
others, as well as work being carried out at MIT's Media Lab. If this
research fulfills its promise, it's just one more way in which the
metaphor of spectrum-as-resource fails and misdirects policy.
"The best science is often counterintuitive," says Reed. "And bad
science always leads to bad policy."
- - - - - - - - - - - -
About the writer - David Weinberger is the coauthor of "The Cluetrain Manifesto" and the
author of "Small Pieces Loosely Joined."
Here is a good story - well written and very clear - from camera-care.com covering Carver Mead's vision, and the technology behind the Foveon X3 imaging chip.
Echoing his famous dictum, "Listen to the technology... what does it tell you?", the story also mentions his other insights, patents, and businesses based on emulating "nature's" analog designs in the human body. "Neuromorphic electronics," it's now called.
Here comes a digital-camera chip that could change everything
By Eric Levin
"It's easy to have a complicated idea," Carver Mead used to tell his students at Caltech. "It's very, very hard to have a simple idea."
The genius of Carver Mead is that over the past 40 years, he has had many simple ideas. More than 50 of them have been granted patents, and many involved him in the start-up of at least 20 companies, including Intel. Without the special transistors he invented, cell phones, fiber-optic networks, and satellite communications would not be ubiquitous. Last year, high-tech high priest George Gilder called him "the most important practical scientist of the late 20th century."
"Nobody," Bill Gates once said, "ignores Carver Mead."
Digital cameras have relied on image sensors that can't do what color film does: record all three primary colors of light at each point in the image. Instead, each light-sensitive point in the sensor measures just one color (blue, green, or red), and complicated software in the camera calculates the missing colors. Foveon's breakthrough X3 chip solves the problem with a three-layer design that captures red, blue, and green light at each point. To demonstrate quality differences, the monarch butterfly on this page was photographed with three cameras: an $1,800 Sigma SD9 with an X3 chip; a $300 Nikon Coolpix 2500; and a $2,300 Nikon 35 mm F5 film camera. Insets show magnified detail from each camera's image.
And now one of Mead's simplest ideas (a digital camera should see color the way the human eye does) is poised to change everything about photography. Its first embodiment is a sensor, called the X3, that produces images as good as or better than what can be achieved with film. That would make the X3 the most important advance in photography in nearly 70 years, but the long-term implications are even richer. In a year or two, you will be able to pack a true hybrid camera on vacation. It will take high-resolution stills, or upon the flip of a switch, it will take full-length, full-motion video far exceeding the capabilities of present-day hybrid cameras. In the long run, X3 technology could even make cell-phone video sharp enough to project onto a big-screen TV, which would make dandy travelogues to send back to the folks at home, or enhance collision-avoidance systems in automobiles, or improve robot vision.
X3 is the latest and most innovative product from Foveon Inc., the Silicon Valley digital-imaging company that Mead, 68, founded in 1997. Named for the fovea centralis (the part of the human retina where vision is sharpest and most color perception is located), Foveon took as its mission another radically simple idea Mead loves: "Use all the light."
Don't cameras already use all the light that enters the lens? Film cameras do, but digital cameras, with few exceptions, don't. As Mead puts it, "They throw away two-thirds of the light." That makes sense only if you understand how a typical image sensor works. It's basically a rectangle of silicon on which millions of microscopic light-sensitive pixels (technically they're not pixels, but that's what these light-sensing points have come to be called in the digital-camera business) are arranged in a grid. Pixels can't sense color. So a checkerboard of tiny red, green, or blue filters must be bonded to the surface of the sensor so that each pixel lets in one of the three primary colors of light. In so doing, it blocks out the other two.
By comparing each pixel's single-color reading with that of its neighbors, software can derive the values of the two missing colors at each site. That takes approximately 100 calculations per pixel. In a four-mega-pixel camera, a size commonly available today, that adds up to a lot of number crunching. The process is called interpolation, and Mead has a less kind name for it.
"It's a hack," he says. "They have to do all this guesswork to figure out what they threw away. They end up with a lot of data, but two-thirds of it is made up. We end up with the same amount of data, except ours is real."
That is because X3 does what until now only film has been able to do: in one exposure, on one image plane, measure all three primary colors of light at every point on the picture. By doing so, it does away with the bugaboo of so-called mosaic sensors, which often guess wrong, especially at the edges of complicated patterns, introducing moiré effects and jagged color errors called artifacts.
Sensing all three colors at each pixel sounds simple, but more than one industry analyst has described it as "the holy grail" of digital photography. "Engineers have been trying to solve this since the earliest days of digital imaging," says Alexis Gerard, publisher of The Future Image Report. Phil Askey, whose exacting equipment tests on his Web site, dpreview.com, are must reading in the trade, says, "This could be the first sensor to truly surpass film."
Carver Mead says, "The eye itself has taught us that remarkable things can be accomplished by building intelligence into the image plane. An intelligent image plane gives you higher quality photography with less demanded of the user."
The only camera to contain an X3 sensor now is called the Sigma SD9, a single-lens reflex with a price tag of $1,800 (not including the lens). But about this time next year, point-and-shoot cameras should be available from other manufacturers with X3 technology. They will have chips with slightly less than half as many pixels as the chip in the Sigma and sell for about $500. To be sure, Foveon will not find it easy to elbow its way into a market heavily committed to existing technology. But it has some influential advocates, including Microsoft's Gates.
X3 is based on a well-known property of silicon: It absorbs different wavelengths, or colors, of light at different depths from its surface. A standard wafer of pure crystal silicon (the polished disc, five to eight inches in diameter, on which most microchips are made) is about 1/25 of an inch thick. The absorption of visible light takes place within 1/10,000 of an inch of the surface. If you think of that 1/25 of an inch of silicon as if it were a place in the ocean where the water is 1,000 feet deep, then all the light absorption would be taking place within two or three feet of the surface. At that scale, a human hair would be about 50 feet thick.
What Foveon has done is embed a sandwich of three light sensors within that first 1/10,000 of an inch. How they do it is a guarded trade secret, but the principle is basic physics. Blue light, which has the shortest visible wavelength, about 1/50,000 of an inch, is absorbed closest to the surface. Green light, which has a longer wavelength, penetrates a little deeper. Red light, with the longest wavelength, about 3/100,000 of an inch, burrows down farther before it is absorbed. As the photons strike the silicon atoms, electrons are released. These create electrical charges that the sensors measure.
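The layering principle above reduces to an ordering by wavelength: shorter wavelengths are absorbed nearer the surface, so the top sensor reads blue and the bottom reads red. A toy sketch, using the approximate wavelengths quoted in the text (the green value is an assumed intermediate, and treating absorption depth as simply increasing with wavelength is a deliberate oversimplification):

```python
# Toy sketch of the X3 layering idea: order the three sensor layers by
# wavelength, since longer wavelengths penetrate deeper before absorption.
# Blue and red wavelengths (in inches) are from the text; green is an
# assumed intermediate value for illustration.

wavelength_in = {
    'blue':  1 / 50_000,
    'green': 2.2 / 100_000,
    'red':   3 / 100_000,
}

# Shallowest layer absorbs the shortest wavelength, and so on down.
layers = sorted(wavelength_in, key=wavelength_in.get)
print(layers)  # ['blue', 'green', 'red'] -> top, middle, bottom sensors
```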
It took film almost a century to figure out the best way to do color. But once Kodachrome was perfected in 1935, competing schemes largely faded away. By devising the simplest and most reliable solution to the problem (a three-layer emulsion), Kodak won the color war. Since then, color film has undergone many refinements, and other companies have grabbed significant market share. But the three-layer emulsion is still gospel.
"Exactly the same thing is going to happen with electronic capture," says Mead. "X3 is going to be the surviving image-capture technology. There's no question about it."
If the rest of the industry isn't quite ready to throw in the towel, it's because the technology that Mead is up against has been king of the hill, despite all its flaws, almost since it was invented.
If you have ever bought a video or digital camera, you have probably seen the letters CCD stamped on the box. They stand for charge-coupled device. Versions of the CCD were invented independently at Bell Labs and Philips Electronics about 33 years ago. They produced much better images than could be obtained with other kinds of solid-state sensors, and they took off. Specialized long-exposure CCDs have long proved invaluable in astronomy, and the CCD put video cameras and digital still cameras under countless Christmas trees.
If the CCD was the hare of the digital-imaging race, something called CMOS, for complementary metal-oxide semiconductor, was the tortoise. CMOS is a sophisticated process developed in the 1960s that produces chips with many transistors. Without it, X3 would not have happened. Although CMOS chips didn't make good images at first, they made terrific microcircuits and became the backbone of the computer revolution. CMOS was the technology Gordon Moore, the first president of Intel, had in mind when he made his famous 1965 prediction (subsequently known as Moore's law) that transistor density on a chip would double every year. Before he refined his prediction, changing it to doubling every 18 months, Moore consulted with an expert in circuit miniaturization: brilliant young Caltech professor Carver Mead.
Most sensors in point-and-shoot digital cameras are smaller than a fingernail, but the X3 chip is closer to the size of a frame of 35 mm film. So are the sensors in other top-of-the-line digitals, but the X3 is complex to manufacture because of its three layers and its transistor density. "If you were to buy a Foveon chip," says Chris Joyce, director of process technology at National Semiconductor, "you could deprocess it and figure out what we've done. But you wouldn't be able to figure out how we've done it."
Mead has been fascinated by electricity since he was a child in Big Creek, California. His father worked in a hydroelectric plant that supplied power to Los Angeles, so Mead got to watch as new generators were brought online. He was awestruck by the immensity of the machinery and the force that turned it. He earned his bachelor's, master's, and Ph.D. degrees at Caltech, and became a renowned professor of engineering and applied science. With Caltech's Richard Feynman, the Nobel laureate, and biophysicist John Hopfield, he designed a course in 1986 called The Physics of Computation, which became an instant classic, and its principles are still taught by its creators' academic descendants.
In 1969, Mead took Moore's law to a new level. Miniaturization, he said, would make it possible to build very-large-scale integrated (VLSI) circuits. They weren't large in size -- in fact, they became smaller and smaller -- but they incorporated more and more transistors on a single chip as the transistors themselves shrank. Mead predicted that transistors would eventually get to be as small as six-millionths of an inch across, or .15 micron. (Coincidentally, that is the size of the transistors in today's state-of-the-art Pentium 4 microprocessor, made by Intel. Foveon's X3 is right behind it, at .18 micron -- the smallest transistor in an image sensor.) Mead laid out an innovative set of concepts for designing VLSI circuits, which became the industry standard as his students went out into the work world. Dick Lyon, now Foveon's chief scientist, was one of those students. "Carver taught a generation of engineers how to cope with the complexities of millions of transistors on a chip before anyone believed there would ever be millions of transistors on a chip," Lyon says.
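The two figures quoted for transistor size are the same measurement in different units; a one-line check confirms the conversion:

```python
# Check that six-millionths of an inch matches the .15 micron quoted above.
MICRONS_PER_INCH = 25_400  # 1 inch = 25.4 mm = 25,400 microns

width_microns = 6e-6 * MICRONS_PER_INCH
print(f"{width_microns:.4f} micron")  # 0.1524, which rounds to the .15 quoted
```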
In the 1980s, Mead turned his attention to an even more complex and ingenious kind of circuitry: the human nervous system. It is clearly the most successful computing system of all time, and it is an analog, rather than a digital, system. That part intrigued Mead the most. Instead of registering information in digital strings of ones and zeros, an analog system, such as a retina or an ear, measures a continuum of values. Digital systems in the technological world were quick and complex; analog systems were slow and simple. Mead thought he could learn a lot by studyingÑand attempting to model in siliconÑthe natural masterpieces of human analog design.
This was such a new field, it had no name. So he gave it one: neuromorphic electronics. And it grew into businesses. His study of the cochlea was instrumental in the founding of Sonic Innovations, a digital hearing aid company. Synaptics, which he cofounded in 1986, created the laptop TouchPad. Foveon was spun off from Synaptics and National Semiconductor, a chip manufacturer, to pursue the ideal of a digital image sensor that would see in full color, like the human eye.
A slender, energetic man with a trim Vandyke beard, Mead has a disarming, gentle speaking voice and a face that recalls both David Carradine in Kung Fu and Richard Kiley in Man of La Mancha. He rarely raises his voice, but when something gets under his skin, you'll know it. "The more I learned about human vision," he says, "the more it was clear that what these mosaic sensors were doing was introducing artifacts into the image. It was one of those things that becomes so massively annoying that after a while you think you ought to go do something about it. It was clear that the way image sensors worked was brain-dead. I talked to a lot of people, and nobody got it. So I finally said: 'That's not a problem. That's an opportunity.'"
Among Mead's many talents is putting together a brain trust. From National Semiconductor, he recruited an iconoclastic and fertile-minded inventor and electrical engineer named Dick Merrill to be his design wizard. His choice for chief scientist, Dick Lyon, had worked with him on several neuromorphic projects. Mead's artificial-retina project at Caltech had benefited greatly from ideas Lyon had developed in inventing the optical mouse at Xerox's Palo Alto Research Center in 1980.
At the outset, X3 wasn't even a pipe dream. Foveon's first cameras were based on patented prisms that split incoming light into its primary colors, directing the light into three separate sensors. The system produced extraordinary images but was cumbersome (the camera had to be attached to a laptop) and costly. So one of Lyon's first tasks was to sift through a pile of ideas that Merrill had written and see if any of them seemed promising.
One did. It proposed a way to build a sensor by taking advantage of the light-absorbing qualities of silicon. Merrill had hit upon this method by accident. Back at National, he had built a sensor that didn't separate color the way he expected it to. "So, not knowing much about sensors, I decided to look into why that was the case," Merrill recalls.
Then he pretty much forgot about it, until the day Lyon came by asking if he really thought it could work. Merrill thought it could, so they took it to Mead, who was skeptical. It turned out Mead had experimented with exactly that concept at Caltech and found the color separation that silicon achieved was too indistinct to be useful for photography.
Merrill and Lyon "massaged it around," as Lyon says, and brought a revised design back to Mead, who was encouraged. They conducted simulations, but the color separation was still fuzzy. "We were tearing our hair out about where to go with that," Lyon recalls, "when Merrill came up with a much more radical structure for how to make the thing."
"Once we worked through the science of it, and Merrill and Lyon convinced me it was fundamentally sound, I've never doubted," Mead says. "We basically bet the company on it."
Now the ball was in Lyon's court. When he got to Foveon, he realized that the one expertise the company was missing was optics and color theory. "So I basically taught myself from books what I needed to know," he says. Lyon's colleagues marvel at his polymathic abilities, not to mention his large collection of slide rules, including a working seven-foot-long Pickett that he hung above the bookshelves in the company library.
Lyon believed he could find a way to use the soft color separation in silicon advantageously. The reason he thought so, as he reminded the others, is that "the eye itself does it that way. The color sensitivities of the cones in the retina are not sharply defined. On a graph, they would look like smooth overlapping curves."
In Merrill's new architecture, the top, or blue, sensor would pick up a little green and red light as those rays of light passed through. The middle, or green, sensor would pick up the leftover blue and a little red, and the bottom, or red, sensor would pick up the leftover green and the last vestiges of blue. All digital cameras transform the raw measurements that come off the sensor into a set of numbers called a matrix, which allows a computer monitor to display them as images.
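That recovery step can be pictured as inverting a linear crosstalk model. The numbers below are purely illustrative -- not Foveon's actual calibration -- but they show how overlapping layer responses can still be unmixed:

```python
import numpy as np

# Illustrative crosstalk matrix (NOT Foveon's real calibration): each layer
# responds mostly to "its" color but partly to the others, as described above.
# Rows: top (blue), middle (green), bottom (red) layers; columns: B, G, R light.
crosstalk = np.array([
    [0.80, 0.15, 0.05],   # top layer: mostly blue, a little green and red
    [0.10, 0.70, 0.20],   # middle layer: leftover blue, mostly green, some red
    [0.05, 0.15, 0.80],   # bottom layer: last vestiges of blue, mostly red
])

scene_bgr = np.array([0.2, 0.5, 0.9])   # hypothetical scene color
raw = crosstalk @ scene_bgr             # what the three stacked layers measure

# "Unmixing" the raw readings recovers the scene color -- the matrix step
# the article refers to.
recovered = np.linalg.inv(crosstalk) @ raw
print(np.allclose(recovered, scene_bgr))  # True
```

As long as the crosstalk matrix is well conditioned, the overlap does no harm -- which is Lyon's point about the eye's smoothly overlapping cone sensitivities.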
The X3 needed much bigger matrix numbers than most sensors, and that made highlights and shadow areas harder to render. "Even with film, it's a challenge to produce good color in highlights and shadows," Lyon says. "Professional photographers know how to light a scene to work around that. But the issue for us was getting good pictures when shot by amateurs, where the lighting is really challenging. That's what kept me up at night."
Mead's goal of "making the image plane more intelligent" hinged on more than measuring all the colors. The vision was of a sensor that could be used to create video as well as stills -- and, unlike current cameras, do both well -- and eventually take on tasks such as focus and exposure, now handled by expensive supplementary chips.
There is no way a CCD can do all that. It has its hands full merely schlepping the electrical charges from pixel to pixel in a process that is usually described as a "bucket brigade." By applying a positive voltage to the negatively charged electrons in the pixel, the CCD attracts the electrons and essentially hands them off from one pixel to the next. There are different patterns, but typically all the pixels in the bottom row of the array are shuttled off, then the charges from the line above drop down, something like a row of blocks in a game of Tetris. It's not only a relatively slow process, but applying all those successive voltages also makes CCDs power hogs, which is one reason users of digital cameras have to replace or recharge their batteries so often.
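A toy model of that readout, assuming nothing about real CCD timing or voltages, makes the serial nature concrete:

```python
# Toy "bucket brigade": each cycle the bottom row leaves through the readout
# register, and the rows above drop down one position, Tetris-style.
def ccd_readout(frame):
    """Serially read a 2D grid of charges, bottom row first."""
    frame = [row[:] for row in frame]   # copy; readout consumes the array
    out = []
    while frame:
        out.extend(frame.pop())         # bottom row shifts out, charge by charge
        # the remaining rows have implicitly dropped down one position
    return out

charges = [[1, 2],
           [3, 4],
           [5, 6]]
print(ccd_readout(charges))  # -> [5, 6, 3, 4, 1, 2]
```

Every charge makes the whole trip through its neighbors, and each shift costs a voltage swing -- the power-hog behavior the article describes.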
So this is where the tortoise re-enters the story. Only about half the surface area of any given chip is actually devoted to light sensing. The rest contains supporting circuitry, which in a CCD means the bucket brigade. By using state-of-the-art CMOS technology, Foveon was able to pack more transistors into each pixel, which enables the chip to do more things. Equally important, CMOS architecture allows the camera software to communicate directly with the pixels and read them out in parallel -- meaning all at once -- rather than serially, one after the other, as in a CCD.
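By contrast, an addressed CMOS-style array behaves more like memory. This sketch is an abstraction, not any real sensor's interface, but it shows direct pixel access with no charge shifting:

```python
# Abstract CMOS-style sensor: pixels are addressable like memory cells, so
# software can grab one pixel, a region, or the whole frame without moving
# charge through neighboring pixels.
class AddressablePixelArray:
    def __init__(self, pixels):
        self.pixels = pixels            # 2D list of charge values

    def read_pixel(self, row, col):
        return self.pixels[row][col]    # direct access; no bucket brigade

    def read_frame(self):
        # Conceptually parallel: every pixel's value is available at once.
        return [row[:] for row in self.pixels]

sensor = AddressablePixelArray([[1, 2], [3, 4]])
print(sensor.read_pixel(1, 0))  # -> 3
```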
The greatest of all parallel processors is the human nervous system. The signals generated by the rods and cones in the eye's retina, for example, don't stand in line waiting to get into the optic nerve like cars backed up at a tunnel. "They're actually processed rather extensively right in the retina," says Mead. The brain samples the signals in parallel, taking information as it comes.
"The nervous system always operates on partial information," Mead explained in an interview with the technical journal EE Times. "Its basic assumption is that you start with no information, and anything beyond that is something, and it is a heck of a lot better to have something rather than nothing. But what the digital paradigm is based on is, 'I've got to have all my arguments before I start any operation'Ñthe opposite end of the spectrum!" The significance of X3 is that it advances the art of parallel processing and brings us closer to an age of truly smart machines.
CCDs are dinosaurs, and it isn't only Carver Mead who thinks so. A 2002 report by the market research firm In-Stat/MDR projected that production of CMOS sensors would draw even with CCDs in 2004 and pull away thereafter. Much of the growth will be in the lower end of the market -- in cell phones, Webcams, and toys, which are already popular in Japan. But Foveon was encouraged last year by Canon's introduction of two high-end CMOS cameras, both of which use a filter mosaic. "They've made the world safe for CMOS," quips Lyon.
Dinosaurs, of course, ruled the earth for a long time. CCD makers have managed to develop a number of ingenious processing strategies to compensate for the system's inherent weaknesses. And they've managed to bring down prices steadily while increasing pixel count. Even though there's more to producing good images than stuffing greater numbers of smaller and smaller pixels into a camera, smart marketing has established the mega-pixel as the sexy equivalent of horsepower.
Mead has another paradigm in mind: "Whenever a radically new technology has developed, a new major company has come out of it. When the transistor came along, we got Texas Instruments. When the integrated circuit came along, we got Intel. When we got microprocessors and personal computers, we got Microsoft.
"That's the way I see Foveon. It doesn't mean we're going to put the others out of business. We have no intention of doing that. They're becoming our customers. We're forming alliances.
"We're not going to be an Apple," he adds. "We're not going to turn ourselves into an island. We're going to be more like a Microsoft or an Intel." It's amazing, come to think of it, how much Mead does look like Don Quixote.
ORIGINALLY PUBLISHED ON DISCOVER: Vol. 23 No. 12 (December 2002)
------------------------------------------------------------------------
RELATED WEB SITES:
Foveon's Web site, which includes visual depictions of how the X3 chip works, can be found at Foveon.com
Phil Askey's Web site has tough, exacting reviews of digital cameras, as well as an excellent glossary of digital camera terms: DPreview.com
For tutorials and information on digital imaging of all kinds, go to ExtremeTech.com