Red Hat CEO Loves Linux, but Sure Likes His Apple iPad and iPhone Too
By Sean Michael Kerner | June 29, 2012
From the 'Everyone Loves Apple' files:
When I was at LinuxCon in Vancouver last year, I was accosted by a few people for using an iPad (it's not Linux, after all). Apparently I'm in very good company.
Red Hat CEO Jim Whitehurst is an iPad user too. Oh, and he's an iPhone user and has a MacBook Air as well. The MacBook Air runs Fedora, though (and hey, Linus likes the MacBook Air too!).
The reality is that the iPad is the best tablet there is. For Red Hat's CEO, it helps improve his productivity and workflow... with one key caveat.
There isn't a LibreOffice/ODF viewer that he could find that works for his iPad. So if anyone out there has a solution, there is likely a very grateful Linux CEO out there...
Solaris, SCO, Linux, Open Source and Red Hat Summit
By Sean Michael Kerner | June 28, 2012
From the 'Conference Panel Shockers' files:
BOSTON. Vendors and analysts alike tell me people aren't worried about Linux litigation and that Solaris isn't a concern for Linux vendors.
As it turns out, that's not entirely accurate.
I was on a panel today at the Red Hat Summit in Boston, and we got a question about the risk of Linux from a legal perspective.
Apparently it is still a (small) concern. I did the best I could to explain that SCO is now a zombie (they still exist, but does anyone take them seriously?) and that, of course, every major enterprise Linux vendor provides indemnification. Still, the simple truth is that it was a concern, and a question that led to some conversation.
No, this isn't a meeting I was in 10 years ago; this was today, in Boston, in 2012.
Also, I made a comment at one point that most people have likely moved off Solaris for co-location and web-tier application hosting. Our moderator, IDC analyst Al Gillen, then asked the audience if anyone was still using Solaris... remember, this was an audience at RED HAT SUMMIT, and guess what?
At least 15 hands went up.
So, despite what others might tell you, apparently litigation concerns and Solaris are still out there, and they belong to Linux users (Red Hat Summit attendees) too.
Red Hat Now Has Customer Trials of Open Source OpenStack
By Sean Michael Kerner | June 27, 2012
From the 'that didn't take long' files:
BOSTON. It wasn't all that long ago that Red Hat officially announced that they were joining the OpenStack open source cloud project.
Today at the Red Hat Summit, Red Hat CTO Brian Stevens revealed that Red Hat already has active customer trials of OpenStack.
Stevens sees OpenStack as a nice complement to OpenShift, the hybrid PaaS solution being announced this week. With both platform and infrastructure, Red Hat will have an end-to-end solution for the cloud.
The icing on the cake is CloudForms - a solution that at one time many people (myself included) assumed would be a full public cloud IaaS on its own. CloudForms is really more of an orchestration layer, leveraging deltacloud, so it can manage all clouds - OpenStack or otherwise.
Sean Michael Kerner is a senior editor at InternetNews.com, the news service of the IT Business Edge Network, the network for technology professionals. Follow him on Twitter @TechJournalist.
Red Hat CEO: We're at the Dawn of the Information Economy (and Linux is the Sun)
By Sean Michael Kerner | June 26, 2012
From the 'open source as a tool of industry' files:
BOSTON. There are keynotes that are little more than product pitches, and then there are conference keynotes that educate and inspire. Red Hat CEO Jim Whitehurst delivered the latter during his keynote kickoff for the Red Hat Summit here today.
Whitehurst gave the capacity crowd of 3,000 plus a history lesson in the economics of industry, going all the way back to the first industry – agriculture. He explained how the Industrial revolution changed the world and drew some very stark parallels with how the modern Information age is now unfolding.
"We've talked about being in an information age for the last 60 years but now we're finally in the information economy," Whitehurst said. "Information assets are now more relevant for the first time and competitive advantage is more driven by information than physical assets."
The Industrial Age began in 1750, but Whitehurst argued that it wasn't until 1810, with the invention of the auto-lathe (a tool that made standardized nuts and bolts), that the true value of industrialization was realized. Prior to that invention, nuts and bolts were not commodities, but once they became componentized, innovation led to the internal combustion engine, planes, trains and automobiles.
"60 years after the invention of the computer we are now finally getting to standardized piece parts, what i'd call cloud computing," Whitehurst said.
Whitehurst added that if nuts and bolts could have been patented, and you had to buy your screwdriver from the same company you buy your screws from, jet engines would not exist today. That's where open source and standards, much like the auto-lathe, are critical.
"I gave a mundane view of the cloud as nuts and bolts but the other half of the story is what you can do with the nuts and the bolts," Whitehurst said. "We're now finally giving standardized parts, so people can do what they want."
Is Java EE 6 Bloated? Red Hat Doesn't Think So.
By Sean Michael Kerner | June 26, 2012
From the 'Bloatware Sucks!' files:
BOSTON. Mark Little, Red Hat's JBoss CTO, delivered a keynote at the JBoss summit this evening that really struck a chord.
Little talked about all the problems that Java has yet to solve: better NoSQL support (Red Hat has an upcoming graph-based NoSQL database in that area), better density for multi-tenancy and the need for things like real-time deterministic scheduling.
While there are improvements needed in Java, Little took aim at one of the prevailing myths about Java EE 6.
"There is a perception that Java EE is bloated," Little said. "We believe we have shown with JBoss EAP 6 that's not the case. There is nothing in the EE 6 specification that says you have to have a bloated implementation. Bloatware should be a thing of the past. It is possible to have a lightweight Java EE 6 stack."
JBoss EAP 6 was officially released last week and is Red Hat's flagship Java middleware server. When he talks about lightweight, Little isn't talking about Web Profiles, either; he's talking about the full Java EE 6 setup. To prove his point, Little noted that JBoss has been able to get a complete Java EE 6 stack running on a Raspberry Pi.
Now that's amazing.
Red Hat Enterprise Linux 7 to Feature Btrfs
By Sean Michael Kerner | June 25, 2012
From the 'Goodbye EXT4?' files:
For the last several years, I've been asking Red Hat when Btrfs would land in Red Hat powered Linux distributions. Now I know the answer.
Tim Burke, Vice President of Linux Engineering at Red Hat, told me that Btrfs is still considered a tech preview in the recently released RHEL 6.3 update. He added that Red Hat is currently focusing its Btrfs efforts on RHEL 7, where the Btrfs filesystem will be a more integrated component.
This is good news.
Red Hat's enterprise embrace of Btrfs has been a bit slower than that of other enterprise Linux distros. SUSE has been providing a supported Btrfs implementation since SLES 11 SP2, which was released in March of this year. Btrfs provides enhanced rollback and snapshotting features over EXT4, though comparative performance could potentially be a problem. If Red Hat is focused on that problem for RHEL 7, we could be seeing Btrfs as a new default alongside Ext (which was the plan all along, wasn't it?).
The Btrfs world has had some recent transitions too, of course: Chris Mason (founder of Btrfs) has left Oracle, though he remains the leading figure in the Btrfs development world. Though Btrfs started off as an Oracle project, it enjoys the support of a wide number of vendors and developers today, proof positive once again that the open source model works.
**UPDATE** As the GREAT Wim Coekaerts, Linux overlord at Oracle, all-around great person and ping pong player extraordinaire, has reminded me, Oracle Linux also supports Btrfs.
Fedora Linux 18 Set to Redefine the Initial User Experience
By Sean Michael Kerner | June 19, 2012
From the 'Improving Linux Desktop' files:
One of the reasons I've personally long enjoyed using Red Hat Linux, and now Fedora, is the ease of installation (mostly thanks to Anaconda). It's an experience that is set to get even better in the upcoming Fedora 18 'Spherical Cow' release later this year.
One of the major new features that is set for inclusion in Fedora 18 is something called 'Initial Experience.' The basic idea is to enable new users to have a fully functional desktop out of the box in a very seamless manner. This new Initial Experience will also give users a tour of GNOME 3 so they will know what's what.
"We will provide an improved and smoother initial experience for new users of the Fedora desktop, and let them configure relevant parts of GNOME so that they end up on a fully functional desktop after going through the initial setup," Fedora's feature wiki states. "In addition, the install experience of the desktop spin may be improved."
A great idea, and one that is proof positive that Red Hat's Fedora community still cares very much about new users and about the desktop experience as a whole. Going a step further, many new users don't understand that there are lots of desktop choices (and I personally think GNOME 3 is less than ideal for new users). As such, it might be an even better idea to have some kind of desktop sampler spin where Fedora's new users can try out the various desktops and find the best experience for them.
That idea aside, it's important that Fedora 18 has this new initial user experience. Windows 8's new-user ramp-up is pretty slick (I tried it out on VirtualBox running on Fedora 17), and it is in Fedora's interest to kick the Linux initial experience up a notch or two as well.
Is Mozilla Abandoning Open Source Gecko for Apple iPad? WebKit Wins.
By Sean Michael Kerner | June 18, 2012
From the 'Fennec Fail' files:
For the last four years, every time I've asked a Mozilla person if Firefox was going to come to Apple iOS, the answer has always been no.
Apple's own restrictive policies will not allow another rendering engine, which means that Mozilla's core Gecko rendering engine is not an easy option. Mozilla is now investigating another route by building a new browser, codenamed Junior.
Since Marc Andreessen and Netscape, Mozilla and its forebears have always been Gecko-based. Junior will be the first Mozilla browser to use WebKit.
I'm not surprised.
WebKit dominates the mobile landscape as the default rendering engine on iOS, Android and even BlackBerry. Mozilla's move to WebKit means it has finally admitted that Gecko (alone) cannot win mobile.
Will Firefox for Android suffer?
Of course it will. It stands to reason that a 'native' Mozilla browser that uses WebKit would be faster on Android too. So in time, if Junior turns out to be a real effort, the core base on which Firefox, and indeed all of Mozilla's success, has been built could be left behind for the mobile world.
Mozilla itself is now clearly focused on mobile (its new CEO comes from mobile, and so does its new head of PR), so it stands to reason that Junior is a serious effort. A split Gecko/WebKit effort would not initially be a good thing for Firefox, though it is the right decision for Mozilla's mobile aspirations. Mozilla could end up with a two-track development cycle, but then again, that's not that strange.
The core rendering engine development can potentially be carved out from the interface development. Sure, it's not the same type of control that Mozilla is used to, and sure, they'd be tied very closely to Google and Apple's WebKit development efforts, but hey, that just might be a good thing too.
Who is Charleston Road Registry? The Leading gTLD Applicant
By Sean Michael Kerner | June 15, 2012
From the 'Google's New Company' files:
If you look at the list of new gTLD applicants, the name Charleston Road Registry is very prominent. The new gTLD names were revealed this week, and there are over 1,900 applications in total. Charleston Road Registry applied for 101 new top-level domains, including .web, .tech, .search, .cloud, .chrome, .youtube and .google.
The list really shouldn't be a surprise, since as it turns out Charleston Road Registry **IS** Google.
Section 18(a) of the ICANN gTLD application form clearly spells out that:
Charleston Road Registry is an American company, wholly owned by Google, which was established to provide registry services to the Internet public.
So why isn't Google just using its own name? Apparently the rationale is that Google is already a registrar, and it wanted to have a separation for registry services.
Now what about the name Charleston Road Registry?
That's an easy one. Google's official address is on Amphitheatre Parkway, but the Googleplex actually sits between that street and... Charleston Road.
OpenSUSE Linux Delays 12.2 Release in an Effort to Make the Distro Better
By Sean Michael Kerner | June 14, 2012
From the 'OpenSUSE Lives!' files:
OpenSUSE is a Linux distribution in a period of transition. On Thursday morning the project announced a delay in the latest milestone build of OpenSUSE 12.2 and issued a call for a new development model.
This got me thinking: what's wrong with OpenSUSE's current model? On the surface, it looks fine to me, with regular milestone releases that seem to be coming with the regularity I'd expect. So I asked Jos Poortvliet, openSUSE community manager, what was going on, and he provided some very illuminating answers.
One of the items that OpenSUSE developer Stephan Kulow calls for in the new development model is more developers to do the integration work. So does that mean that OpenSUSE now somehow has fewer developers working on the project?
"No. In fact, it's the exact opposite. The delay was caused by our growth," Poortvliet said. "Right now -- and unlike other Linux distros - we have a git-like model of development. Instead of 'blessed' developers who maintain packages, we have teams (devel projects) who are collectively responsible for a number of packages. They work with contributors from outside the projects by way of the usual "branch-fix-merge" way of working that the Linux kernel has. Once everything works in the devel project the team creates a merge request for Coolo (our release manager) for Factory, our development tree."
He added that the problem now is that a number of the merge requests can break things in Factory. There are some automated checks in place, but OpenSUSE developers can't see what other packages are affected by a new one, and whether they break.
"Those breakages have to be fixed by people who can work on the whole project," Poortvliet explained. "They have commit rights everywhere and know about (almost) everything."
Poortvliet suggested that one way to solve the problem would be to introduce one or more 'staging projects,' an approach similar to the one the Linux kernel currently takes with the linux-next tree or the -mm tree.
"By the numbers, in November 2010, we had 2100 merge requests during that month," Poortvliet said. "In April 2011, we had 2,400. But in November 2011, we had 3,500. Therefore, you can also say that we've had 3500 merge requests in 7 months. Additionally, we had an unprecedented peak of almost 5,000 merge requests in September of 2011. It essentially boils down to 20 percent growth year over year."
It's important to remember, though, that OpenSUSE also has the Tumbleweed rolling distribution repository. One potential option for the future of OpenSUSE is to just abandon the milestone model, adopt Tumbleweed rolling releases as the standard and then snapshot every six months.
"That's certainly one of the options on the table - and already mentioned a few times in the discussions," Poortvliet said. "Part of the issue with this specific idea is that currently, the way Tumbleweed works makes 'big plumbing' really hard, if not impossible. Basically, Tumbleweed is set up to be re-based every now and then - that's when the big changes happen. Doing them more incrementally is much harder."
CSS3 Flexible Layouts 'Flexbox' Spec Nears Final at W3C
By Sean Michael Kerner | June 13, 2012
From the 'Table Memories' files:
Back in the old days, when I built sites with tables (we didn't have this fancy CSS stuff in the '90s), I would commonly build flexible table layouts that would scale to a certain percentage of the screen size. In the table model, that was easy, and we all did it.
CSS, with its awesome abilities, is also more complicated in a lot of ways, due to an overwhelming number of choices. One of them is the new CSS3 Flexbox, officially known as the CSS Flexible Box Layout Module. It's a specification that has undergone significant change over its development but is now nearing the finish line.
The near-final working draft was published on Tuesday and is the Last Call Working Draft. The deadline for comments is 3 July 2012.
So, what is this magic flexbox spec? According to the abstract:
In the flex layout model, the children of a flex container can be laid out in any direction, and can "flex" their sizes, either growing to fill unused space or shrinking to avoid overflowing the parent. Both horizontal and vertical alignment of the children can be easily manipulated. Nesting of these boxes (horizontal inside vertical, or vertical inside horizontal) can be used to build layouts in two dimensions.
Yes, this is more precise than the table-percentage control of the Web 1.0 era, but the general idea isn't all that different. Flexible layout and 'flexbox' control are more essential today than ever before, as we have more diversity in display sizes. In the table era, it was 640x480, 800x600 and maybe 1024x768. In the modern CSS era, anything goes, and on mobile and tablet screens, flexibility is equally important.
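The flex sizing behavior the abstract describes can be illustrated with a toy sketch (a simplification of my own in Python, not the actual CSS algorithm, which also handles flex-basis resolution, flex-shrink and min/max clamping): children start at a base size, and the container's leftover space is divided in proportion to each child's grow factor.

```python
def flex_layout(container_size, children):
    """Toy model of CSS flex-grow: each (base, grow) child starts at its
    base size, then the container's free space is split among children
    in proportion to their grow factors."""
    base_total = sum(base for base, grow in children)
    total_grow = sum(grow for base, grow in children)
    free = container_size - base_total
    if free <= 0 or total_grow == 0:
        return [base for base, grow in children]
    return [base + free * grow / total_grow for base, grow in children]

# An 800-unit-wide container: a fixed 200-unit sidebar (grow 0) and two
# flexible panes (grow 1 and grow 2) that share the 400 leftover units.
print(flex_layout(800, [(200, 0), (100, 1), (100, 2)]))
```

Change `container_size` and the flexible panes re-divide the leftover space while the grow-0 sidebar stays fixed, which is exactly the kind of scaling the old percentage-based table layouts approximated.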
Mozilla, Google and Microsoft all have named editors on the Flexbox spec and there is some support in Chrome, Firefox and IE10 already. I strongly suspect that with this spec being finalized by the end of the summer, it will be broadly supported on all browsers by the end of the year.
Linux Foundation Brings Training to the Enterprise
By Sean Michael Kerner | June 11, 2012
From the 'Learn Linux. Make $' files:
The Linux Foundation is launching a new Enterprise Linux Training program. The program is all about preparing IT pros in a vendor-neutral way for Enterprise Linux architecture deployments.
Linux isn't just for developers anymore, which is where the Linux Foundation's training focus had been until now. To be fair, Red Hat has had certification in this area for some time, but they aren't exactly vendor-neutral, are they?
So what are some of the 'enterprise'-level courses? The first three that the Linux Foundation has listed include: Cloud Architecture and Deployment, Advanced Performance Tuning, and Linux Security.
Going a step further, the Linux Foundation is now announcing its 2012 scholarship program to help computer science students learn Linux. There are five scholarships available, each worth approximately $2,500.
Submissions are due by 12:01 a.m. PT on Wednesday, July 11, 2012 and the application form can be found at: https://training.linuxfoundation.org/2012-linux-training-scholarship-program
"Our Linux training program has seen a surge in demand since its inception and we’re happy to be able to provide this valuable service, as well as to offer Linux training opportunities to developers who might not otherwise be able to take advantage of them," Amanda McPherson, vice president of marketing and developer programs at The Linux Foundation, said in statement.
Linux developers and enterprise architects make their fair share of money too. According to the Linux Foundation, Linux IT pros actually make more money than non-Linux IT pros.
Oracle Loses Top Linux Filesystem Dev
By Sean Michael Kerner | June 11, 2012
From the 'Oracle Open Source (?)' files:
Oracle is losing one of its key Linux developers. Chris Mason, leader of the Btrfs filesystem effort, officially left Oracle on Friday.
In his farewell note, Mason took an indirect shot at Oracle.
Oracle has been a fantastic place to work, and I really appreciate their support for my projects. But, I've decided to take a new position at Fusion-io....Fusion-io really believes in open source, and I'm excited to help them shape the future of high performance storage.
The first time I ever spoke to Mason was back in October of 2008. Oracle's PR team was eager to connect me with Mason to talk about the future of Linux storage. That was, of course, before Oracle acquired Sun and before they had Solaris. Btrfs has matured over the last four years such that it is now beginning to appear as a fully supported technology in enterprise Linux distributions. Throughout it all, Oracle has helped to lead the effort, though the pool of contributors is much wider now.
As a note of pure speculation, perhaps Oracle was looking to tighten up the Btrfs process (kinda/sorta like ZFS) and steer it to become a bit less open. Or perhaps Oracle's not so 'hidden' intellectual property agenda has made it a distasteful place to be for an open source dev -- I'm not sure.
Oracle has lost one of its leading developers, but thanks to the open source model, Oracle will still continue to benefit from his work. Oracle Linux will continue to use Btrfs, and no doubt there are other devs at Oracle who will still contribute to the effort as well.
**UPDATE** Got an email from Chris Mason today clarifying the situation. Turns out my speculation is off the mark. Here's what Chris wrote to me:
Oracle has strongly supported my GPL projects over the years, and I was in no way implying that Oracle does not believe in open source. Oracle always encouraged and rewarded my contributions to open source.
Majority of Open Source Azure Cloud Projects Hosted on Github
By Sean Michael Kerner | June 08, 2012
From the 'Everyone but Linus Torvalds Loves Github' files:
No, Azure is not open source, though there is a growing ecosystem of open source developers building projects around it. New data from Black Duck, provided to InternetNews, sheds some really interesting light on the state of open source cloud development projects.
According to the data, derived from Black Duck's knowledge base, which collects data from the major open source repositories, there are some 2,187 open source cloud projects. Amazon represents 44 percent of them, Azure 36 percent, Rackspace comes in at 13 percent and Force.com rounds out the list at 7 percent.
For Azure, 56 percent of projects are hosted on Github, 42 percent on Microsoft's own Codeplex and 2 percent on Sourceforge. I personally would suspect that Microsoft is the leading contributor to Codeplex while everyone else is going to Github.
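For a rough sense of scale, the percentages above can be turned back into approximate project counts (a back-of-the-envelope sketch; Black Duck reported shares, not exact counts, so the rounding here is mine):

```python
total = 2187  # open source cloud projects, per Black Duck

# Platform shares as reported: Amazon 44%, Azure 36%, Rackspace 13%, Force.com 7%
shares = {"Amazon": 0.44, "Azure": 0.36, "Rackspace": 0.13, "Force.com": 0.07}
counts = {platform: round(total * share) for platform, share in shares.items()}
print(counts)  # roughly 962 Amazon, 787 Azure, 284 Rackspace, 153 Force.com

# Where the ~787 Azure projects live: Github 56%, Codeplex 42%, Sourceforge 2%
hosts = {"Github": 0.56, "Codeplex": 0.42, "Sourceforge": 0.02}
azure_counts = {host: round(counts["Azure"] * share) for host, share in hosts.items()}
print(azure_counts)  # roughly 441 on Github, 331 on Codeplex, 16 on Sourceforge
```

In other words, well over 400 Azure-related open source projects already sit on Github, versus roughly 330 on Microsoft's own Codeplex.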
From a licensing perspective, Azure or otherwise, Black Duck has not yet broken down the projects by license, but that's something that I'd like to see. I'd bet that Apache 2.0 dominates.
Overall, Dave Gruber, Director of Developer Marketing at Black Duck, told me that, based on his data, it's clear Github is the favored open source repository for cloud development projects. For me, it's clear at this point that the vast majority of new open source efforts are headed to Github as well.
Is Microsoft Allowing Ubuntu Linux on Azure without a Patent Deal?
By Sean Michael Kerner | June 08, 2012
From the 'Ethical Concerns' files:
Microsoft alleged years ago that Linux and open source technologies infringe on over 200 Microsoft patents. It's that basic allegation that Microsoft used to convince Novell/SUSE to sign a patent deal, and it's the same basic underpinning for a dozen deals with Android vendors.
So why is Microsoft allowing Linux to run on its Azure cloud? In particular, why Ubuntu?
Perhaps there is some kind of hidden deal. It's a question I asked Canonical directly and this is what they told me:
"Canonical has strong principled positions on a range of topics,"Chris Kenyon, Vice President, Sales and Business Development, told me. "We do not and have not made compromises on those positions."
That statement would imply that there is no patent deal with Canonical. But that hardly seems fair to SUSE then, right? SUSE went out of its way to sign a special deal with Microsoft. How come Ubuntu gets to 'play' in the same field?
It's a question that I don't have a definitive answer to. It's the same kind of murky territory that global organizations also face with China (in a significantly more serious sense; I'm just reaching, and it's not a direct comparison by any means). Everyone wants to make a buck, but no one really wants to compromise their ethical standing.
I haven't yet seen the full pricing for Azure and specifically the differences between SUSE pricing and Ubuntu pricing. Perhaps it's just a margin thing that Microsoft builds in as a way to placate their intellectual property interests. Or perhaps they just don't care at this point as Azure is trying to gain share.
It is however a curious question.
Why IPv6 is STILL Like Broccoli
By Sean Michael Kerner | June 07, 2012
From the 'It's Good For You!' files:
We've all known for years that IPv6 is the way to go as IPv4 address space dwindles.
Three years ago, Leslie Daigle, CTO of the Internet Society, explained that IPv6 was like broccoli: something we know is good for us, but don't want to eat.
Now as we are formally in the IPv6 era, following yesterday's World IPv6 Launch, I thought it would be a good opportunity to ask Daigle if she still thinks that IPv6 is like broccoli.
"I've said in the past that IPv6 is a 'brocolli' technology," Daigle responded. "I still think it is a tech everybody knows it would be good if we ate more of it but nobody wants to eat it without the cheese sauce.
Collectively we're figuring out that broccoli doesn't taste so bad and quite honestly we're ready to clear our plates and get onto desert."
With IPv6 usage now starting to rise, albeit slowly, broccoli is still an apt metaphor in my opinion. While thousands of sites have implemented IPv6, millions more still need to do so.
Google is seeing massive spikes in IPv6 usage, as much as 150 percent growth in the last year, and Facebook has over 20 million IPv6 users.
There are still hundreds of millions, no, billions more users for the Internet as a whole to move to IPv6. Guess we'd better stock up on that cheese sauce for the broccoli...
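To put numbers behind the dwindling-IPv4 story, Python's standard ipaddress module makes the size gap easy to see: all of IPv4 is about 4.3 billion addresses, IPv6's space is 2^128, and even one routine /64 IPv6 subnet holds as many addresses as 4.3 billion copies of the entire IPv4 Internet.

```python
import ipaddress

ipv4_all = ipaddress.ip_network("0.0.0.0/0")  # the entire IPv4 space
ipv6_all = ipaddress.ip_network("::/0")       # the entire IPv6 space

print(f"IPv4 total: {ipv4_all.num_addresses:,}")    # 4,294,967,296
print(f"IPv6 total: {ipv6_all.num_addresses:.3e}")  # about 3.403e+38

# A single standard /64 subnet (the documentation prefix is used here as
# an example) already dwarfs all of IPv4:
one_subnet = ipaddress.ip_network("2001:db8::/64")
print(one_subnet.num_addresses // ipv4_all.num_addresses)  # 4294967296
```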
Is Linux Maturing as a Gaming Platform? Humble Thinks So...
By Sean Michael Kerner | June 04, 2012
From the 'Game On' files:
One of the things that some find lacking on the Linux desktop today is the availability of good games.
You know, stuff like Star Wars: The Old Republic or Diablo III. Personally, I (try to) run those via WINE, but that's not an ideal solution. The right solution is native games (or even cross-platform HTML5 goodness). Ubuntu has been making some specific strides in this area with the addition of games to its Software Center, and in particular with the recent availability of the Humble Indie Bundle.
By making games available via the Software Center, the idea is that they're 'easier' for users to find and install. While I've never really complained much about RPM-based installs, easier is not a bad thing, especially when it comes to gaming, when you just want a quick fix.
David Pitkin, Director of Consumer Applications at Canonical, told me that Canonical's partnership with Humble is a natural extension for both companies to promote games on the Ubuntu platform and give users the best experience possible.
As far as the lack of availability for 'big name' games on Linux, Pitkin agreed with me that WINE can work great for some games but that it is not a solution for everything all the time.
"I think that situation has changed if you look at our Top 10 paid applications there are many quality games there," Pitkin said. "Yes, Canonical is seeking out developers today for the Ubuntu Desktop and also don’t forget our future products like the TV and Ubuntu for Android. Ubuntu is a great platform for Apps and games as the Software Center and Humble Indie Bundle have proven."
Time will tell if Pitkin is correct, but given the move toward cross-platform, HTML5-based development, the availability of games for Linux could well be a whole lot easier in the years ahead than it has been in years past.
Fedora Linux 18 Will Boot on UEFI Hardware with Microsoft's Help
By Sean Michael Kerner | June 01, 2012
From the 'Strange Bedfellows' files:
We've known for some time that Microsoft has been pushing hardware vendors toward secure UEFI boot (Secure Boot) as part of Windows 8. The tl;dr version is that the firmware will only run boot software it can cryptographically verify; basically, in order to boot, the bootloader will have to be signed with a key the hardware trusts.
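In practice, the pre-boot gatekeeping is a verification step: the firmware refuses to run any boot binary it cannot validate against its trusted key database. Here is a deliberately simplified Python sketch of my own (real Secure Boot verifies X.509/Authenticode signatures, though the UEFI 'db' can also hold plain hashes of approved binaries, which is the variant modeled here):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical boot binaries: stand-ins for a vendor-signed shim and a
# random unsigned bootloader.
approved_shim = b"bootloader blessed through the official signing service"
unknown_loader = b"home-built bootloader the firmware has never heard of"

# Trusted database baked into the firmware: hashes of approved binaries.
db = {sha256(approved_shim)}

def firmware_will_boot(binary: bytes) -> bool:
    """Refuse to run any boot binary whose hash is not in the trusted db."""
    return sha256(binary) in db

print(firmware_will_boot(approved_shim))   # True
print(firmware_will_boot(unknown_loader))  # False
```

This is why an unsigned Linux bootloader simply will not start on locked-down hardware: the fix has to involve getting something into that trusted database, or signing with a key already in it.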
The problem with that is that it won't easily allow people to load Linux.
So what's a Linux vendor to do?
Red Hat's Fedora Linux has a solution, and it's not one that is entirely satisfactory. Fedora will $BUY$ a key via Microsoft that will enable it to boot. This is the solution now being offered up by Fedora developer Matthew Garrett (and his blog post has fantastic details about the whole concept and the deliberation).
The key costs $99 and the funds go to VeriSign (though hardware signing is done via Microsoft).
The problem of course is that Fedora will perhaps be tied to Microsoft's Secure UEFI efforts in order to enable Linux on new hardware. The bigger problem would be if Secure UEFI wasn't dealt with and Linux wouldn't run on new hardware at all.
I respect Garrett's position, though I have another solution.
Don't buy UEFI Windows 8 hardware. Seriously. Why pay the Microsoft tax? Build your own machine; motherboards from computer stores (Newegg, etc.) will have lots of options, and you won't need to bother with the Windows 8 pre-load either.
The larger question, of course, comes with server hardware: will Fedora's corporate sponsor Red Hat capitulate there as well? Server hardware (arguably equally important to secure) doesn't seem to be stuck with the same Secure UEFI approach that Microsoft is ramming down hardware vendors' throats for Windows 8. When it comes to servers, Red Hat can and does influence hardware vendors.
Garrett notes that Red Hat could potentially have influenced hardware vendors for UEFI here too, but that would have still left other distros exposed. I don't know; personally, I would have liked to have seen the hardware vendors come to terms with the reality that there is more than one desktop operating system.
Oracle Loses. All Your APIs Are Belong to Us
By Sean Michael Kerner | June 01, 2012
From the 'Legal Pariah' files:
The modern world of software development relies on APIs in order to interact. APIs are the glue that hold and integrate things together and are as essential as oxygen and water.
In its battle against Google over Android, Oracle tried to copyright Java APIs. If Oracle had won, software development as we know it would have been changed for the worse. If Oracle had won, the modern Internet as we know it could not exist in the same form it does today. If Oracle had won, everyone but Oracle (and big $$) would have lost.
Oracle did not win.
Judge Alsup delivered a decision yesterday that will echo through the ages about copyright and development processes. In his judgment, he wrote:
Contrary to Oracle, copyright law does not confer ownership over any and all ways to implement a function or specification, no matter how creative the copyrighted implementation or specification may be. The Act confers ownership only over the specific way in which the author wrote out his version. Others are free to write their own implementation to accomplish the identical function, for, importantly, ideas, concepts and functions cannot be monopolized by copyright.
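The distinction the judge draws maps directly onto code (an illustrative sketch of mine, not an example from the case): the declaration, the part that describes the function, can be identical across implementations, while the body, the author's own expression, is written independently.

```python
# Two independently written implementations of the same "API". The
# declaration (name, parameters, behavior contract) is identical; the
# code that accomplishes the function is each author's own expression.

def str_index(haystack: str, needle: str) -> int:
    """Implementation A: delegate to the built-in find."""
    return haystack.find(needle)

def str_index_clone(haystack: str, needle: str) -> int:
    """Implementation B: same contract, hand-rolled scan."""
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):
        if haystack[i:i + m] == needle:
            return i
    return -1

# The identical function, accomplished two different ways:
print(str_index("open source", "source"))        # 5
print(str_index_clone("open source", "source"))  # 5
```

Under the ruling, the second author owes the first nothing for matching the declaration, which is precisely why independent reimplementations of an API remain possible.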
No doubt the decision will be challenged in the weeks and years to come, but for now it will stand. Software development will continue as it has for the past decade, leveraging APIs to openly integrate software and connect people and technology.