So, not surprisingly, it turns out that NVidia does understand that they have a problem: the probable extinction of the discrete GPU in the X86 space. To recap, both of their main competitors, Intel & AMD, are general purpose processor manufacturers first and GPU vendors second. The trend in CPUs has always been to integrate formerly external components, such as floating point processors, onto the CPU. For cost and performance reasons this just makes sense; and if that happens, odds are good that NVidia will have a very hard time continuing to provide competitive products, since there will be much less motivation for either AMD or Intel to do the kind of engineering needed to support discrete GPUs.
So, what are NVidia's options here? The most obvious one is to create or acquire their own 64-bit X86-compatible processor. The first of those two options is fraught with danger. Intel is a company that takes intellectual property rights very seriously. You don't want to mess with their lawyers unless you have deep pockets and an airtight case. Such litigation can take years to settle. The second option is much safer, but there aren't really a lot of candidates. Probably the best is acquiring Via. With a market cap of just over $28 billion as I write this, they certainly are affordable. One aspect of the FTC/Intel settlement is a modification of the intellectual-property agreements with AMD, Nvidia Corp. and Via Technologies Inc. so that those chipmakers can more easily do mergers and joint ventures with other companies without the threat of a lawsuit from Intel. In my opinion the prospects of both Via & NVidia would be improved by a merger. Consumers would benefit as well by having three credible competitors in this space. NVidia, however, appears to have other ideas.
Jen-Hsun Huang, NVidia's CEO, was recently quoted as saying, "Our CPU strategy is ARM." This statement doesn't preclude NVidia acquiring Via, but it is worth examining in more detail. Unlike Intel's X86 architecture, ARM is readily licensed and the cost of doing so is modest. ARM processor variants are used in most high end "smart" phones as well as in devices such as Apple's iPad. The first question that comes to mind when evaluating this strategy is: how important is the X86 instruction set likely to be in the future? If you run Windows or recent variants of MacOS, the answer to that question is pretty obvious. Increasingly, though, the computing devices we interact with run neither, and thus do not require an X86-compatible processor. Given this, NVidia's strategy makes some sense. My Droid X is my camera, video recorder, email reader, navigation system, games machine and casual web browsing tool of choice. If it had a keyboard I'd use it even more.
The problem with that strategy is that there tends to be a lot of downward pressure on cost. This isn't a segment where you're going to find high margins for any sustained period of time. We're getting into an area here where I don't actually have a lot of experience or knowledge, but what the heck. NVidia has essentially exited the chipset business. First the acquisition of their main competitor ATI by AMD essentially closed off that space. Then Intel refused to license newer technologies to NVidia, which essentially killed what was left of that business. Chipsets likely aren't high margin for the most part because there is a lot of pressure on cost for pretty much every component in a computer. So it seems reasonable to think that NVidia's desire to do more in the ARM space is at least in part intended as a replacement for their chipset business.
But what about the high margin discrete GPU part of their business? How are they planning on replacing or salvaging that situation? My guess is that they are putting all their eggs in CUDA and high performance computing. In this model the GPU essentially becomes a really fast, highly parallel vector processor. If they put enough memory on their GPU cards they can minimize the downsides of not having a low latency/high bandwidth interconnect to the CPU and main memory. Heck, they could even build their own supercomputers using ARM processors to handle the basic computing tasks, the PCIe bus, and a bunch of slots populated with GPUs. The amount of processing power they could get into a single rack with this kind of model would be REALLY impressive. And since most of the processing takes place on the GPUs, the host processor really doesn't matter much. Linux already runs on ARM, so the effort there wouldn't be that daunting.
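To make that model concrete, here's a minimal CUDA sketch of the pattern I'm describing (my own illustration, not anything from NVidia; the SAXPY kernel and the sizes are just placeholders): cross the PCIe bus once on the way in, run as many kernel launches as you like against memory that stays resident on the card, and only cross back when the work is done.

// Sketch: treating the GPU as a wide vector processor with CUDA.
// Data stays resident in device memory across kernel launches, so the
// relatively slow PCIe link is only crossed at the start and the end.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Simple SAXPY kernel: y = a*x + y, one element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;               // 1M elements
    const size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);

    // One transfer in over PCIe...
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // ...then iterate entirely on-card; no host round trips in the loop.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    for (int step = 0; step < 100; ++step)
        saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);

    // One transfer back out when the work is done.
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);       // 2 + 100*2 = 202

    cudaFree(d_x); cudaFree(d_y);
    free(h_x); free(h_y);
    return 0;
}

Nothing in that loop cares whether the host CPU is an X86 or an ARM chip; it's just issuing kernel launches. That's the whole point of the rack-of-GPUs idea.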
One of the cool things about the technology field is that change happens really quickly. If you're a business geek this means you don't have to wait around a decade to see how a particular strategy plays out. Generally three years is plenty. I suspect we'll have a good idea as to the long-term viability of NVidia in twelve to eighteen months.
Image at top is from Wikipedia.