Recently, Microsoft announced that it has joined the Linux Foundation, that SQL Server is now ready for testing on Linux, and that a preview of Visual Studio for Mac is available. This is a transformation that should not be underestimated. These moves reinforce the direction that Microsoft CEO Satya Nadella set a couple of years ago. And they are the right moves for the company to make.
Multithreaded programming has been an important part of much of what I’ve worked on in the past couple of decades. However, a lot of my work has been on high-performance server-side technologies, where speed is as important as reliability and stability. Taking maximum advantage of parallel hardware technologies, such as multicore processors and symmetric multiprocessing (SMP), is a key part of maximizing performance. For some developers, particularly those starting out in the mobile device space, multithreaded programming can be pretty new. Up until this past year or so, mobile devices used a single processor. So, other than cycle-stealing or avoiding situations where you locked up the UI for extended periods, there wasn’t all that much to be gained by writing apps with multiple threads. The rise of dual-core processors (and the forthcoming quad-core processors) has made multithreaded programming more attractive, simply because there are speed advantages to be had.
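To make the basic pattern concrete, here is a minimal sketch in Python (purely illustrative; on iOS or Android you would reach for the platform’s own concurrency APIs instead): long-running work is handed to a background thread so the main thread, the one that would be driving the UI, stays responsive.

```python
import threading

results = {}

def long_running_task(n):
    # Simulate expensive work that would otherwise block the main (UI) thread.
    results["total"] = sum(i * i for i in range(n))

# Hand the work off to a background thread...
worker = threading.Thread(target=long_running_task, args=(100_000,))
worker.start()

# ...while the main thread remains free to respond to the user.
print("main thread still responsive")

worker.join()  # Collect the result only when we actually need it.
print(results["total"])
```

One caveat worth noting: CPython’s global interpreter lock means pure-Python threads overlap I/O but not CPU-bound work, so the multicore speedups discussed above come from native threads (or processes) on platforms like Java, iOS and Android.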
To that end, I thought it would be useful for some people to have a lightweight introduction to multithreaded programming. This first primer is one of a series I hope to build up over time (my next primer will probably be on iOS observers; I get a lot of hits every day on an early post I did on that topic). Keep in mind that this primer is not meant to be exhaustive. There are better works available that treat the subject in far more detail. My goal is to give someone a foundation in multithreaded programming, and enough background that they can be productive in it and will have the grounding they need if they want to explore the topic further.
The link to the primer is here. You will need some kind of PDF reader to be able to read the document. Hopefully someone finds it useful.
A recent post on Android Guys (a very good site for Android news and information, BTW; I highly recommend it) tries to imply that moving to multicore machines for phones and tablets is a big deal. The article covers very old ground on the benefits of a multicore processor. I’m just not sure it’s as hard or as ominous as the author makes it out to be. Why? At the heart of Android and iOS are kernels that already run successfully and efficiently on multicore and multi-CPU machines today. Android is effectively Linux, albeit somewhat stripped down, but the heart of the system is still there. iOS is basically a mobile version of MacOS, which itself is a variant of BSD. Both of these run quite well on multicore machines (or they had better, given that my Macs and Linux boxes are all multicore). I’m not sure how much of the Windows kernel made it into Windows Phone 7, but even Windows 7 is a pretty decent system for multicore/multi-CPU setups.
And it isn’t as if multicore/multi-CPU architectures are new here. Commercial operating systems have been working with SMP-enabled servers for many, many years now. Multicore didn’t change the dynamic that much; it mainly changed the packaging. Dealing with thread-safe memory management, thread-safe device access and thread scheduling is a well-known and well-understood problem, with the basic research going back more than two decades. This isn’t exactly new stuff we are talking about here. There is a generation of software developers who have spent their entire career using nothing but multicore machines.
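As a reminder of how well-trodden this ground is, the canonical teaching example is the shared counter: without a lock, two threads can interleave their read-modify-write sequences and lose updates. A minimal sketch of the standard fix, in Python for brevity:

```python
import threading

class SafeCounter:
    """A counter that can be safely incremented from multiple threads."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        # The lock makes the read-modify-write atomic with respect to other threads.
        with self._lock:
            self.value += 1

counter = SafeCounter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value)  # 40000 -- no lost updates
```

The same pattern, guarding shared mutable state with a mutex, shows up in pthreads, Java’s `synchronized`, and the kernels mentioned above; only the spelling changes.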
Will it be a big deal for app writers? No, not really. I’ve already done a bunch of work with multithreaded apps on both iOS and Android (and one experiment I’m working on explicitly depends on it). Again, designing, coding and maintaining multithreaded apps isn’t exactly new. It is a well-understood problem that many, many people are already familiar with. Putting a multicore machine under the covers will potentially let those apps run better, but the concepts and principles remain unchanged.
Does the impending rise of multicore mobile devices change things? A bit. It brings more processing power into our pockets, and it may help reduce electrical power consumption. It may also make it easier for people to write far more powerful apps, but at the same time make it easier for some lazy people to write poorly-built apps that still perform well. From what I’ve heard, RIM’s QNX-based PlayBook depends on multicore to get its performance. Rumour has it the upcoming iPad 2 (or whatever Apple calls it) will include it as well.
In the end, this development isn’t entirely a surprise and shouldn’t be unexpected. It also isn’t something to be feared or worried about. It is an inevitable technology change that was going to happen at some point; it just may be happening sooner than people expected.
I see that, while server sales overall dipped last quarter, Sun took the biggest hit. This is not surprising, given that Sun has been diminishing in relevance in the server market for the past few years now. With the rise of Linux and commodity Intel servers, and the decline of vendor-specific UNIX implementations (like Solaris, HP-UX and AIX), Sun has been the one hit hardest, while Dell, HP and IBM have benefited. Unfortunately, this starts a bit of a death spiral for Sun: reduced revenue and profits mean less money available for R&D and for changing the company’s direction, which means even fewer sales, less revenue, and so on.
It isn’t as if the industry shift was a big surprise for Sun. They saw it coming, and were starting to move toward commodity-level Intel servers while still offering their SPARC-based machines. IBM was doing the same with their Power line, building both Intel-based servers and Power servers (running AIX or Linux, or both with their VM technology). IBM also continued with AIX, OS/400 and zOS, adding Linux to their stable of offerings. HP, however, was moving away from its collection of different CPUs (MIPS, PA-RISC, HP3000) to Itanium and Pentium/Xeon, essentially outsourcing all of its CPU effort to Intel and focusing on the packaging and the software. HP also retired a number of operating systems, simplifying down to HP-UX and Linux. Dell stuck with Intel only. All of them offered Windows-based servers as well.
So why has Sun been hit so hard? It’s not like their strategy was radically different from IBM’s (offer their own OS, along with Linux). I think Sun was hit by two things. First, while they offered Linux as a solution, they didn’t have the same level of experience or facilities for building commodity hardware that vendors like IBM, HP and Dell have. While IBM and HP have long and deep experience with proprietary systems, they also have a lot of depth building generic Intel-based machines. Dell’s bread-and-butter is commodity machines. All three have a lot of experience at leveraging supplier relationships and driving out costs. Sun had some, but not nearly as much. Sun also has a very different culture than HP, IBM and Dell. They tend to get caught up in the technology and the “coolness”, and aren’t as geared to “design it fast, build it cheap” as vendors who are used to high-volume/low-margin selling. It meant that, at times, Sun was more willing to push the edges of things, and go for faster/better/cooler (with some pretty neat stuff as a result), but when you rely on volume, volume, volume, that isn’t as important.
The second thing that hurt Sun was the “feedback loop” that bad news can cause: Sun isn’t doing well, so people start to move away from Sun, so Sun does worse, causing more people to abandon the brand. Rinse and repeat. There probably was a point where Sun might have been able to recover, but it appears they are past the point of no return. I’m not sure that the Oracle deal will change that, assuming it eventually goes through. The uncertainty about the Oracle-Sun deal simply adds more ballast to a sinking ship, driving it down further. I know it’s not statistically significant, but when I look at new server projects I’ve been involved with, and others I know about, very few (if any) have Sun on the list of server vendors to consider. Again, looking at projects and installations I know, those that have Sun gear have more often been replacing it with IBM, HP or Dell, not with new Sun equipment. I don’t know for sure that this is the larger trend, but I wouldn’t be surprised if it is.
So what happens to Sun? The path will depend on the results of the Oracle deal, but I think the ultimate destination is the same. If the Oracle deal happens, we could see some systems dedicated to running Oracle, but over time, I would expect to see Sun continue to be removed from the server room, perhaps at a slower pace. I’m honestly not sure what Oracle thought they were buying, but by being a hardware vendor, they are competing directly with their “partners” in HP and Dell, and further encouraging IBM to find a way to push DB/2. Other than owning Java and getting their hands on MySQL (which it turns out they may have to part with), there’s not that much left of value inside Sun.
Without the Oracle deal, I think Sun’s exit from the server room will accelerate over time, leaving Sun primarily with Java and MySQL as the only technologies worth anything. Solaris will become another “me too” UNIX variant with a niche base.
Either way, Sun will be done as a server vendor in the medium-to-long term. It’s sad, because Sun used to be one of the leaders. They were an important part of the rise of RISC, and the resulting evolution of other CPU architectures in the industry. Solaris was a benchmark for operating systems in many ways. I think that only a company like Sun could come up with Java, or could truly appreciate MySQL. But being technologically “better” doesn’t always translate into long-term market success, and it will be unfortunate to see them go.
I read an article on MoneyCentral where the author ponders whether Apple is blowing its chance to “dominate the market” in PCs and smartphones. On PCs, his assertion is that the Windows Vista debacle was an opportunity for Apple to sweep in and push Microsoft aside. On smartphones, he implies that Apple had a chance to dominate a green field.
From where I sit, I’m not sure the author really understands either market. On PCs, yes, Vista was a disaster, but it didn’t affect the largest and most profitable segment of PC buyers very much: corporations. Many, many corporations continued to buy PCs running Windows XP, and Microsoft had to continually extend support for it to keep those customers happy. The reality is that, no matter how technologically superior MacOS is, businesses were not going to just dump their Windows machines in favour of it. They have far too much invested in software, infrastructure and employee training/familiarity to abandon it. This isn’t like switching brands of toasters or microwave ovens. Switching PC operating systems is a significant undertaking. The Mac was able to make small in-roads in some businesses, but Apple has not focused on the Mac as an enterprise computing platform. The opportunity on the enterprise side was extremely small, if not largely non-existent. The release of Windows 7 won’t cause corporations to suddenly buy more PCs: they were already buying what they needed (or not buying, courtesy of the recession). The Mac wasn’t on most lists to begin with, and didn’t have a realistic chance of being there, either.
On the consumer side, Apple has had some success. But no matter how much better MacOS could be for the average person, again, Apple is up against the fact that buying a Windows machine (even a Vista-based one) is viewed as the “safe” alternative. It isn’t about price. The average person doesn’t buy a super-cheap $500 PC; they spend $1,000 or more, right in the range where low-end Macs are positioned. The PC is “safe” because of the vast library of consumer software available, specifically things like games. It’s also “safe” because the computer most people use at work is a Windows-based machine, and most of their computer-knowledgeable friends are more likely to know how to deal with a Windows PC. Ultimately, running XP Home, Vista or Windows 7 doesn’t really matter to most consumers. Apple is up against a monumental amount of inertia here, and has done as well as could be expected.
When it comes to smartphones, the author of the article appears out of his depth, and seems to focus solely on the US market. Globally, the biggest players in smartphones are those based on Symbian. In the US, most smartphones right now are corporate phones, and that’s where BlackBerry rules. The consumer market in the US is still nascent, and Apple is doing very well there. What Apple is doing is avoiding the sorts of things that have made it hard for other players to sustain their profits on handsets: too many models, too many networks and not enough (if any) profitability per handset. Current handset manufacturers have no effective control over what they build: they are essentially told by the networks what to build, and it results in them having far too many handset variants, with the attendant costs.
Apple is trying to change that formula by taking control. By limiting their network providers initially, Apple got a chance to “test drive” the market in a controlled environment while keeping their risk and costs low. It appears Apple is going to add other networks in the US. They’ve already done that here in Canada, since you can now get the iPhone on Rogers, Fido, Telus and Bell. Apple is moving in the same direction in the much larger mobile phone markets in Europe and Asia. The exclusive AT&T deal wasn’t forever. Apple’s success gives them a measure of control and leverage over other networks. This means they can say “no” to overly-customized variants of the handset that will only work well on a specific provider’s network. That in turn keeps costs down and margins higher.
Google, on the other hand, is trying to go down the road Microsoft has already travelled. Microsoft built Windows Mobile, and left it up to the device makers to do something with it. What Google is doing differently is, first, not expecting a significant revenue stream from Android on handset sales, and second, actually maintaining the operating system consistently. The risk for Google, though, is the loss of control to the handset makers. Some handset makers want to change the look and feel quite drastically (Samsung is apparently looking at this), and the risk is that the result won’t look or act like other Android phones. The OS will be a commodity, and could be marginalized to the point that people don’t care about it. This means Android’s success is completely at the mercy of the handset makers, and Google has no effective control of the brand. As a result, Google is hedging its bets, making sure its software is available on other smartphones as well.
Either way, it is far too soon to tell who is going to dominate the US marketplace, and if Symbian will be supplanted as the leader globally. Apple is doing well and RIM is making in-roads on the consumer side, by leveraging their position on the enterprise side. Android should do well, but the risk there is the handset makers marginalize it and don’t buy into it exclusively. I expect Android to be around, but not necessarily as a well-known individual brand in the minds of the consumer (like Symbian is today).
The last element of the article is the author’s dismissal of Apple’s stance that marketshare doesn’t matter, but being profitable does. On this one, Apple has it right. My only addition would be that it should be sustainable profit. Marketshare doesn’t pay the rent; profit does. Why should Apple attempt to bankrupt itself trying to dominate the PC market? Particularly a market where the entrenched player (Microsoft) is there not because of a superior product, but because it is the de facto standard on the business side, and that spills over to the consumer market. For Apple to dominate, they would have to go after the enterprise market in a huge way, and that is a hard sell. That isn’t to say it couldn’t happen. The mainframe dominated the business computing landscape for decades, but was eventually supplanted by UNIX, Windows and now Linux-based servers. Changes in the computing landscape are possible.
Going back to a previous post I wrote (The End of Windows?) I believe that the next big battleground is smartphones, or more specifically, high-powered highly-mobile computing. The mainframe didn’t die (IBM still sells billions of dollars worth of them). It was overshadowed by other server computing. I don’t think the PC will die, but it will change form as more and more people rely on computing that fits in their pocket.
So, has Apple squandered some opportunity to dominate PCs and smartphones? Given that smartphones for consumers are still quite new, I don’t think Apple has missed an opportunity there. That battle is just beginning. For PCs, I don’t believe that there was an opportunity to begin with, as described by the article’s author. I think Apple (and Linux) would still have made the gains they have made. The Windows juggernaut may have stumbled a bit with Vista, but not enough to harm it as much as the author implies, because the real fight for the PC desktop is the enterprise, and Vista isn’t the competitor there, XP is. Ultimately, Windows has far too much momentum behind it to be replaced in a matter of a few years.
Will Windows ever be supplanted as the prominent operating system on the desktop? Is there an “end” for Windows? Hypothetically, it is possible. Virtually no technology has an unassailable position. The very few exceptions are things like 110-volt, 60 Hz outlets in Canada and the US (and 230 V, 50 Hz elsewhere) and lightbulb sockets. Examples of dominant technologies and business models being replaced are everywhere. Outside of computing, propeller-driven aircraft gave way to jets. Analog TV was replaced in the US by digital broadcast TV. Cable-driven excavators were largely replaced by hydraulic equipment. Catalog sales by mail were replaced by catalog sales by phone, which are being replaced by on-line ordering via the Internet.
In computing, mainframe computers were supplanted, or more correctly supplemented and eclipsed, by UNIX and Windows servers. Proprietary hardware from a limited selection of vendors has faded into the background, replaced by commodity hardware from multiple manufacturers. MS-DOS gave way to Windows, UNIX is slowly slipping behind Linux, and Internet Explorer has given ground to Firefox.
So too, over time, will Windows give way to other operating systems. Windows has already started to slip in market share, losing a little bit of ground to MacOS and Linux. In almost all of the cases above, though, the old technology never really disappeared, and most of it is still around today (we still have mainframes, prop-driven planes and cable-driven excavators). What happened is that they lost their position as the dominant technology, as people turned to other forms of technology to do what they needed to do. I do expect that Windows will, over time, find itself holding a less prominent place in the computing world.
However, conventional operating systems like MacOS or Linux are, in many ways, “more of the same”. They don’t represent a fundamental shift in how people approach computing. They simply present two other large operating systems as alternatives to Windows. And, like Firefox against Internet Explorer, they are gaining some ground. But I’m not sure they are better “enough” to replace Windows, or at least take over a significant share of the desktop and end-user computing space. The potential is there for MacOS to gain further ground, particularly in the enterprise space once Outlook is available, but I’m not sure we would see it dominate the desktop the way Windows has to date. What would be different enough is to have computing and end-user information always available, at any time, for users to access and use, and it is this type of shift that I think will reduce the dominance of Windows on end-user computing.
There have been many, many attempts at giving people a way to “take their information with them”. The Xerox Alto workstation, with its dedicated network, was a way for people in an office to sit down at any Alto, log in, and get “their” desktop with their files and data. Something close to this is available with Solaris and dynamically mounted NFS filesystems, which can give you your environment on any machine in your network. The original vision for the NeXT was for you to use a removable disk that contained your data and your environment, including your software. You sit down at a computer, plug in your disk, and get to work. Everything you need fits in your hand.
Docking stations and notebooks are the current iteration in that direction. You can have a large, multiscreen environment with a dedicated keyboard and mouse in your office, a smaller setup at home, and all you do is take your notebook back and forth. Compaq and Apple started this with some very early subnotebooks and dedicated docking stations. Many business laptops from Lenovo and Dell offer docking stations that provide this same type of functionality today.
These various solutions have problems. The “network as computer” approach of the Alto and Solaris was limited by the technologies available when it was conceived. Since it required specialized hardware and software, and needed large amounts of bandwidth, it didn’t scale beyond an office environment. The NeXT concept was closer, since the device was quite portable (although not quite pocket-sized), but it was burdened by the limits of the technology of the time, primarily speed, and it only worked on NeXT devices. The dockable notebook, while more functional and ‘scalable’ in that it is a standalone computing device you can use anywhere, still results in (for the most part) bulky devices that you have to cart around.
Cloud computing and browser-based services like Google Docs take the “network as computer” model and scale it better, since the passage of time has produced standards for browsers and browser environments. Our available bandwidth has also increased over time. The downfall, though, is availability, along with questions about the security of your data. Some web-based services only work when you are connected to the web, and don’t work when you happen to be offline; for business travellers, this can be an issue. However, this type of computing means that it doesn’t matter what operating system you run. As long as you have a browser with the right capabilities, you can do your work. This is a notable shift, and could supplant Windows. However, I’m not sure it is fundamental “enough”, since accessing these services is largely done via notebooks and desktops running Windows in the first place. It isn’t getting people to think about computing in a different way, just changing how they use some of their software.
There are two potential game changers that could throw a spanner into the Windows machinery: netbooks and smartphones. Currently, neither is in a position to directly challenge the supremacy of the desktop/notebook for resource-intensive computing or the enterprise. Their small screens, tiny keyboards and limited computing capability mean they aren’t there yet. But what they are doing is getting people to think differently about their computing needs. As more and more people move beyond dedicated Internet connections at home or the office, and start to move to “always on, always there” Internet access from nearly anywhere you can sit, stand or lie down, the potential for desktop computing to shift is there. Inevitable advances in technology could allow these devices to become our primary computing platform, and as a result supplant Windows as the operating system on our “desktop”. They offer a balance: they can use Internet-enabled services, but also act as standalone computers when the network isn’t there.
Of the two, I believe that the smart phone has the greatest potential, but it also has the steepest hill to climb. While the netbook is compelling, and for some users will make sense as their core computing device, I believe that for a lot of people a powerful smart phone, with a rich set of applications, supplemented by external monitors, keyboards, etc could change the game, and start to move Windows into the background. I believe this because of the size and portability of the devices, which makes it trivial to take the machine with you.
Imagine a world where your computer is in your pocket. All of your key files and other data are available on a device that is with you whenever you need them. Using wireless connections or a dedicated docking station for your phone, you can use normal-sized keyboards and large screens to edit documents and spreadsheets, manage e-mail, and engage in real-time conversations either via voice or text messaging. When on the go, you have this little device that can get you your e-mails, allow you to search for information, browse documents, and engage in other forms of communication. It can inform you and entertain you. You can still do some things with it, even when you have no network connection available.
Right now, Blackberries, iPhones and Android phones can provide anywhere from 70-90% of this, depending on your needs. I had occasions recently where all I had was my iPhone. I discovered that I can use an iPhone in its current form to do about 80-90% of what I need to do, leaving out programming and heavy-duty document editing tasks. As these devices get more powerful, and get more storage and better battery life, I could see them being useful for the bulk of the office tasks that I do today using dedicated desktop and notebook hardware. I don’t believe I am atypical in this case. If I could get the iPhone connected to a bigger screen (with more pixels) and a real keyboard, I could probably do almost everything I need to except for programming.
Will this work for everyone? Of course not. There are some things that are simply too resource intensive. Gaming, writing and testing software, computer design and engineering simulations come to mind right off the top. But, just like no one motor vehicle can “do it all”, I’m not naive enough to think that a souped-up smartphone will be able to do it all either. But I don’t believe that smartphone technology will simply stay put, and can see it evolving and improving over time.
How close are these devices? I believe they are maybe halfway there. The three shortfalls I see are CPU power, storage, and battery life. As always, technology marches forward, and these will be overcome. I don’t expect it to happen tomorrow, or even in the next year or two. But they will improve.
Ultimately, for Windows to fade away, it will take a fundamental change in how computing works. I don’t see it being replaced by another variation of a desktop OS like MacOS or Linux, simply because they aren’t better “enough” to compel a change of that magnitude. I also don’t see Windows, MacOS or Linux disappearing completely: they (or something like them) will have a place on the desktop and notebooks for a long time to come, just as large UNIX servers and mainframes still have their uses. But for a lot of what people do, an uber-smartphone, with supplemental accessories under some circumstances, could be what eventually moves Windows and similar operating systems to a secondary or tertiary role in day-to-day computing tasks.
According to my friends at IBM, this win for them is based on work my colleagues and I did for them from 2003 to 2005 (while we were active with AMSL). During that time, I was working with the IBM Design Center and their global Financial Markets team to help develop the Stock Exchange Reference Architecture (SXRA). Part of this work was to help them position IBM technology for use in large stock markets. I also built a prototype of a high-speed, low-latency matching system for cash instruments like equities, in Java, on both the zSeries mainframe (under zOS!) and on pSeries running Linux, using the messaging technology referenced in the release, along with WebSphere and DB/2. The goal was to demonstrate that, using off-the-shelf IBM technologies, you could build these very fast and very reliable systems for production use.
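To give a flavour of what a matching system actually does — this is my own toy sketch in Python, not the prototype itself, and real engines add far more (order types, cancels, persistence, failover) — here is price-time priority matching for a single instrument: an incoming order trades against the best-priced resting orders on the opposite side, and any remainder rests in the book.

```python
class OrderBook:
    """Toy price-time priority matcher for one instrument (illustration only)."""
    def __init__(self):
        self.bids = []    # resting buy orders, best (highest) price first
        self.asks = []    # resting sell orders, best (lowest) price first
        self.trades = []  # (price, quantity) of each execution

    def submit(self, side, price, qty):
        book, opposite = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # Cross against the opposite side while the prices overlap.
        while qty > 0 and opposite:
            best = opposite[0]
            crosses = price >= best[0] if side == "buy" else price <= best[0]
            if not crosses:
                break
            fill = min(qty, best[1])
            self.trades.append((best[0], fill))  # trade at the resting order's price
            qty -= fill
            best[1] -= fill
            if best[1] == 0:
                opposite.pop(0)
        if qty > 0:
            # Rest the remainder; a stable sort by price preserves time priority.
            book.append([price, qty])
            book.sort(key=lambda o: -o[0] if side == "buy" else o[0])

book = OrderBook()
book.submit("sell", 101, 100)
book.submit("sell", 100, 50)
book.submit("buy", 101, 120)  # fills 50 @ 100, then 70 @ 101; 30 remain offered at 101
print(book.trades)            # [(100, 50), (101, 70)]
```

The production concerns — sustained throughput, microsecond latency, and guaranteed delivery — are exactly where the messaging and middleware pieces mentioned above come in.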