Red Hat Defends Rackspace, Defeats Linux Patent Trolls
By Sean Michael Kerner | March 28, 2013
From the 'East Texas Courtroom Shenanigans' files:
Linux vendor Red Hat announced this morning that it has defeated Uniloc USA, Inc. in a patent lawsuit.
Uniloc didn't actually sue Red Hat directly; it went after Rackspace (a Red Hat customer). Red Hat indemnifies its customers and has since it launched its Open Source Assurance program back in 2004.
The patent in question is U.S. Patent 5,892,697, which deals with the processing of floating point numbers. The TL;DR version is that the judge disallowed the patent because mathematical algorithms cannot be patented under US patent law.
"We salute Red Hat for its outstanding defense and for standing firm with its customers in defeating this patent troll," Alan Schoenbaum, Rackspace General Counsel wrote in a statement. "We hope that many more of these spurious software patent lawsuits will be dismissed on similar grounds."
Rob Tiller, Red Hat’s Assistant General Counsel for IP, said:
"NPE patent lawsuits are a chronic and serious problem for the technology industry. Such lawsuits, which are frequently based on patents that should never have been granted, typically cost millions of dollars to defend. These suits are a plague on innovation, economic growth, and job creation. Courts can help address this problem by determining the validity of patents early and with appropriate care. In this case, Judge Davis did just that, and set a great example for future cases."
This is what indemnification is all about, folks. Red Hat said nine years ago that it would protect its customers against patent trolls (at the time, the worry was SCO).
Here we are in 2013: SCO is a zombie, trolls still walk the East Texas courtrooms, and the Open Source Assurance model works as promised.
Does the Oracle SPARC Update Leave Linux Behind?
By Sean Michael Kerner | March 27, 2013
From the 'Truth in Advertising' files:
Oracle updated its SPARC chips and servers yesterday, all of it powered by Solaris Unix 11.x.
During the live event, Oracle CEO Larry Ellison proclaimed the new SPARC servers to be the fastest computers in the world.
It's a claim we've heard before from him - in reference to his company's x86/Linux based Exa-class systems.
Ellison said during the event that, 'the ultimate software optimization is hardware' - again something that we've heard in reference to Oracle's engineered systems, which are all x86/Linux.
John Fowler, the former Sun executive who is now enjoying a rebirth at Oracle, praised Ellison for unleashing a new era of innovation for SPARC, a hardware line that many people (myself included) did not think would survive at Oracle.
So what's going on? Are the Exa-class x86/Linux systems the top-end for Oracle? Or is it the SPARC/Solaris UNIX combo?
Truth is that the answer will depend on who you ask and when.
There is no question that Oracle has done right by former Sun customers and extended the innovation curve for them in terms of software and hardware. If you're a Solaris/SPARC user, these are happy days indeed.
On the other hand, there are a lot of people who never bought Sun/SPARC/Solaris, and there are a LOT of Linux shops out there. Oracle has also invested heavily in Linux and its Exa-class systems.
I personally don't have access to test the same workload performance on say an Exadata machine running Linux and a SPARC machine running Solaris, so it's difficult to gauge the performance numbers.
But that's not the point.
I don't think Oracle is pushing its Linux customers to move to Solaris, or vice versa. For those already in the Solaris camp (and hey, if you stuck with Solaris this long...), I suspect they aren't moving. For those that have been running Linux, it's not likely they'll want to shift to Unix (many of them likely fled from Unix at some point).
Oracle's competitive target with SPARC/Solaris is IBM's Power/AIX and HP's HP-UX on Itanium.
Oracle's Linux roadmap and product evolution are solid, and the fact that Larry Ellison stood on a stage yesterday to announce the 'world's fastest computer' as a Unix machine doesn't make Linux any less relevant to Oracle, or anyone else.
OpenStack Open Source Cloud Project Setup Set for a Shakeup?
By Sean Michael Kerner | March 26, 2013
From the 'Core Projects' files:
In the beginning of the open source OpenStack cloud effort there were two projects - NASA's Nova (aka Nebula) and Rackspace's Swift.
Since then OpenStack has added Cinder, Keystone, Horizon and Quantum as core projects, each enhancing the overall platform with storage, identity, management and networking features.
There is a pile of 'other' projects that exist on the periphery of OpenStack now, including Ceilometer, Heat and Red Dwarf, that could expand OpenStack further. The debate about what belongs in the 'core' project is one that I spoke to Jonathan Bryce, Executive Director of the OpenStack Foundation, about in October.
Now it looks like the whole way that OpenStack handles project tiers is set to change. There is a proposal that is likely to be discussed in the next Board meeting of the OpenStack Foundation that will introduce an entirely new concept to the way projects are organized.
Today the three tiers of projects are: Core, Incubated and Community. Those three tiers could soon disappear.
"The new concept that could be introduced is the idea of an integrated project," OpenStack Board member and co-founder of Mirantis, Boris Renski told me. "It means that a project is part of an integrated release."
In that model, there is a stable release of the specific project that comes out on the official OpenStack platform release date, along with all the other projects. Going a level deeper, within the integrated project list, there will be a few projects that will graduate into a new category of Core projects.
Renski expects that projects like Ceilometer and Heat will not make it into Core. The existing core projects of Nova, Quantum and Cinder are likely to be part of the new Integrated Core category.
Personally I think it's an idea that makes a whole lot of sense. In a way, it reminds me of the Eclipse release train model, where the 'core' project is the Eclipse IDE itself, which is then accompanied by 70 or more projects that can all benefit from the same infrastructure.
[VIDEO] Does an Open Source OpenStack Cloud Mean Better Security Compliance?
By Sean Michael Kerner | March 22, 2013
I personally believe that open source is a better methodology for building, procuring and deploying software. However, I also know full well that when it comes to security, configuration choices and implementation often make the difference between being breached and being safe.
So when I recently chatted with the Cloud Security Alliance, I asked them whether it was possible to bake security compliance directly into an open source OpenStack cloud. The executive I spoke with, John Howie, was formerly employed by Microsoft, so there might have been a bias, but his organization's view is that, open source or proprietary, the same controls are needed to secure the cloud.
Howie also notes (in my video interview with him below) that there is NO SUCH THING as a truly open source cloud...
Open Source AsteriskNOW 3.0 Updates VoIP PBX Linux
By Sean Michael Kerner | March 21, 2013
From the 'open source VoIP' files:
I've been an Asterisk fan (and user) ever since the first release of the open source VoIP PBX back in 2004.
Asterisk is an application that runs on Linux; it can sometimes require some interesting packages and can be somewhat challenging to install, which is what led Digium (the lead sponsor behind Asterisk) to launch AsteriskNOW in 2007.
AsteriskNOW is an all-in-one Asterisk Linux distribution that includes the core operating system as well as a front-end GUI.
AsteriskNOW had its last major update in 2009 with the 2.0 release cycle and is now **FINALLY** getting its 3.0 update.
With AsteriskNOW 3.0, the core Linux base is being updated to the latest CentOS 6.4 release, from the previous CentOS 5.x base.
The underlying Asterisk version is being updated to Asterisk 11, a huge jump from the prior Asterisk 1.8 that had been the core of AsteriskNOW 2.0.
Since it's all open source, you can (and should) download it now for free and give it a go yourself.
Debian Wheezy Linux Nearing the Finish Line as 100 Bugs Remain
By Sean Michael Kerner | March 20, 2013
From the 'It's Done, When It's Done' files:
Watching Debian Linux releases come together has always been a long and drawn out process. Few other Linux projects (if any) have the same breadth of platform support or packages and few (if any) have the same fiercely principled approach (hurray Debian Free Software Guidelines) to development either.
The next big Debian release, codenamed Wheezy (all Debian releases in recent memory have been named after Toy Story characters), is nearing the finish line.
First, though, there are 100 release-critical (RC) bugs that need to be fixed.
So how is Debian going to deal with those last 100 bugs?
It's a process that will involve discipline and some package cutting too.
In a mailing list posting, Debian developer Julien Cristau wrote:
"We are only interested in the absolute minimum patches that fix RC bugs. Spurious changes will simply lead to longer review times for everyone, disappointment and ultimately a longer freeze.
It helps us if you justify your request sufficiently to save time going back and forth. We don't know all packages intimately, so we rely on you to answer the question "why should this fix be accepted at this stage?"
Going a step further - Cristau added:
"As the release approaches, it's more likely that we will simply remove packages that have open RC bugs."
Debian has long had the philosophy of being done when it's done. It's a philosophy that has caused trouble in the past (i.e. the Sarge release, which was delayed for nearly a year back in 2005). It's also a philosophy that works (assuming you can herd cats).
OpenStack Grizzly Open Source Cloud Nears RC1
By Sean Michael Kerner | March 20, 2013
The next major release of the open source OpenStack cloud platform, code-named 'Grizzly', is nearing release. By the end of this week, all of the core projects within OpenStack should be at the Release Candidate 1 (RC1) stage.
The RC1s for Keystone and Nova are currently expected by the end of this week.
The way the review process should work is that unless issues are found in the respective RC1 releases, the RC1s will be formally announced as the OpenStack 2013.1 final on April 4th.
So yeah, it's close!
The biggest thing for me (at a high level) in the Grizzly release is actually the removal of what I personally see as the biggest security risk in all previous iterations of OpenStack: the volume code has now been entirely removed from Nova. That volume code allowed Nova to have direct database access, which is a bug I wrote about in February.
"If an attacker successfully exploits a flaw in the hypervisor (as have been found in KVM and XEN in the past), the attacker can easily tamper with the MySQL database, wreaking havoc on the OpenStack Cloud," bug #823000 warns.
All fixed in Grizzly!
Google Chrome OS Linux WAS Exploited at Pwnium 2013 for $40,000
By Sean Michael Kerner | March 18, 2013
From the 'Linux Kernel Exploit' files:
Earlier this month, Google Chrome running on Chrome OS (Linux!) was hailed as a survivor of the Pwnium/Pwn2own event that hacked the IE, Firefox and Chrome browsers on Windows. Apple's Safari running on Mac OS X was not hacked and neither (apparently) was Chrome on Chrome OS.
Google disclosed this morning that Chrome on Chrome OS had in fact been exploited, albeit unreliably. The same researcher who took Google's money last year for exploiting Chrome, known publicly only as 'PinkiePie', was awarded $40,000 for exploiting Chrome/Chrome OS via a Linux kernel bug, a config file error and a video parsing flaw.
Google has already fixed the flaws in Chrome OS 25.0.1364.173, but seeing as this is a Linux kernel flaw, I'm very curious whether it affects any/all other Linux distros.
As is typical for Google, the company offers very little in the way of full disclosure or detail on the flaws fixed. All that Google has publicly posted for now is:
 High CVE-2013-0915: Overflow in the GPU process. Credit to Pinkie Pie.
[chromium-os:39733] High CVE-2013-0913: Time-of-Check/Time-of-Use and counting overflows in i915 driver. Credit to Pinkie Pie.
Neither of those issues is specifically identified as a 'Linux kernel' issue. Google has also not publicly opened up those CVEs, so it's not possible to see the exact bug (which could possibly be in the kernel). As Google is a responsible firm, I'd suspect/hope that the bug has been submitted upstream, though right now it's not super clear to me where that is.
In any event, it's a chained bug and not something that was a reliable exploit, but still... it would be good to see it eliminated from the mainline Linux kernel sooner rather than later.
**UPDATE** A patch has been submitted by Google to the LKML for inclusion in the mainline kernel.
"It is possible to wrap the counter used to allocate the buffer for relocation copies. This could lead to heap writing overflows," Google developer Kees Cook wrote.
Open Source GCC 4.8 Compiler Including AddressSanitizer Security
By Sean Michael Kerner | March 18, 2013
From the 'GNU? What's New?' files:
GCC has been around for 26 years and it remains one of the most important and widely used open source efforts of all time.
This week, the latest incarnation of GCC should be released with GCC 4.8. As with every GCC release, performance optimizations are to be found throughout, improving compilation speed and output.
What stands out to me with GCC 4.8, though, are the new security-related enhancements, in particular those that go after use-after-free memory errors. Use-after-free errors, in my opinion, remain the dominant risk in many apps today; all you need to do is look at the bug/security reports for WebKit or Mozilla Firefox and you'll see why.
To that end, GCC 4.8 includes the Google AddressSanitizer technology.
AddressSanitizer, a fast memory error detector, has been added and can be enabled via -fsanitize=address. Memory access instructions will be instrumented to detect heap-, stack-, and global-buffer overflow as well as use-after-free bugs. To get nicer stacktraces, use -fno-omit-frame-pointer.
I've written on AddressSanitizer many times before, but typically in the context of Google Chrome updates. Google and its cadre of security researchers tend to find *lots* of flaws with this tool. Having this built in and integrated with GCC is a HUGE win for the security of the bazillion developers (accurate number) that use GCC.
Google's ThreadSanitizer, which can find data races, is also being baked in, which should help mitigate the risk of race conditions in compiled code.
GCC 4.8 is currently at the release candidate stage, with the final release expected out later this week.
Facebook secures open source PHP with XHP
By Sean Michael Kerner | March 14, 2013
From the 'HipHop?' files:
Facebook is one of the biggest PHP users on the planet, and they seem to think they can do more on their own than the PHP community can. In 2010, Facebook developers built HipHop as a newer/faster PHP runtime.
Now Facebook is going after security with the new XHP extension. The basic idea behind XHP is to make front-end code easier to understand and to help mitigate Cross Site Scripting (XSS) attacks. In XHP, XML can be used inside of PHP.
"Baking XML into the PHP syntax yields some other advantages which may not be obvious at first. Probably the coolest is that errors in your markup will now be detected on the server at parse time. That is, it is impossible to generate malformed webpages while using XHP," Facebook engineer Marcel Laverdet wrote.
Umm.. yeah that sounds awesome to me.
Though to be fair, it's important to remember that XML and PHP aren't totally isolated from each other. The first PHP 5.0 release back in 2004 baked in XML support; it's just that XHP goes that extra step further that the php.net community never did.
XHP is available on GitHub now; in my limited use it doesn't seem to break existing PHP apps, so this is likely just a net positive for PHP devs.
RIP Google Reader. RSS is Not Dead No Matter What Google Says
By Sean Michael Kerner | March 14, 2013
From the 'Is Google Evil?' files:
In its infinite wisdom, Google is closing its Reader RSS service on July 1st, 2013.
As a user of Reader since the service started back in 2005, I'm not particularly happy, though I'm not surprised either.
RSS and Atom feeds are quite literally the fuel that powers my news gathering capability. I subscribe to a large number of feeds that provide me with the information flow I need to do my job. Yes, I know Twitter, G+ etc. are interesting today too, but none of those has ever come close to the pure power that RSS delivers to me every day.
The death of Reader is not about the death of RSS, or at least I hope it's not. In recent years, though, public-facing RSS has seemed to be in decline.
Remember that Mozilla deprecated its high-level view of RSS feeds back in 2011 with the launch of Firefox 4.
The Director of Firefox at the time told me in a video interview that the high-level RSS features in Firefox were infrequently used. I disagreed then... but I did see the writing on the wall.
Google Reader was and remains the best online RSS feed viewer, and it's an epic shame to see it go, but I will continue to be an RSS user. I've kept a list running locally on my network for the past several years in expectation that this day would come. There are lots of Firefox add-ons that work, though my favorite has always been Sage: lightweight and basic, but it works.
What I hope does not happen as a result of Google Reader's collapse is that sites stop offering RSS feeds at all. But given that one of the world's most popular RSS readers is now going away, how much incentive is there?
I would really like it if Google Does The Right Thing (DTRT) and completely open sources the code behind Reader. That way, someone who doesn't have the short-term narrow view of Larry Page can run an instance on OpenStack (or another cloud) and keep RSS alive.
Akamai CSO Andy Ellis Details Linux Usage - VIDEO
By Sean Michael Kerner | March 11, 2013
From the 'Linux Everywhere' files:
It should come as no surprise that Akamai, the world's largest Content Delivery Network, uses Linux as a core underpinning for its 120,000-server network.
What might not be as well known is that Akamai builds its own flavor of Linux to support its network. Akamai's Linux is based on Debian, and Chief Security Officer Andy Ellis explained to me that it gives his company more control over security.
Mozilla Updates to Firefox 19.0.2 for Pwn2own Flaw
By Sean Michael Kerner | March 07, 2013
From the 'That Was Fast!' files:
Late Wednesday at the pwn2own hacking challenge, security firm VUPEN demonstrated a 0day flaw against a fully patched Firefox 19.0.1 browser running on Windows. VUPEN was awarded $60,000 from the contest organizer HP for the exploit.
Less than 24 hrs after the flaw was first reported, Mozilla is out with a fix.
As it turns out, the flaw is a use-after-free.
"VUPEN Security, via TippingPoint's Zero Day Initiative, reported a use-after-free within the HTML editor when content script is run by the document.execCommand() function while internal editor operations are occurring," Mozilla's advisory stated. "This could allow for arbitrary code execution."
Use-after-free errors are relatively common in Firefox updates. Fixing a reported flaw inside of 24 hours, though, isn't really common for any other browser vendor...
Red Hat Nudges Real Time Linux Forward with MRG 2.3
By Sean Michael Kerner | March 07, 2013
From the 'time is real' files:
Real Time Linux (that is, Linux with a deterministic timing guarantee for when an action occurs) is a big deal for a lot of industries (military among them). Red Hat first announced its production-grade Real Time Linux platform, dubbed MRG, back in 2007. Back then, Real Time enhancements were not part of the mainline Linux kernel, but that has changed over the years.
The MRG 2.1 release which debuted in January of 2012, moved the Real Time platform to the mainline Linux 3.0 kernel. With the new MRG 2.3 release announced this week, Red Hat is advancing to the newer 3.6 kernel.
The faster pace of kernel adoption does not occur in Red Hat's flagship Red Hat Enterprise Linux 6.x platform. The Linux 3.6 kernel was first released by Linus Torvalds in October of 2012, providing new disk and memory suspend capabilities.
According to Red Hat, the 3.6 kernel provides new hardware enablement, improved drivers, enhanced security and microsecond determinism.
Additionally, the MRG platform is now getting a tech preview of the Precision Time Protocol (PTP). PTP provides improved accuracy over what is available with the commonly used NTP.
OpenStack Ceilometer Bringing Metering to Open Source Grizzly Cloud
By Sean Michael Kerner | March 05, 2013
From the 'Metered Cloud' files:
One of the most talked about new technologies for the open source OpenStack cloud platform that I heard about at the last OpenStack Summit was Ceilometer.
Ceilometer provides metering for OpenStack: think billing and usage. It is already in use at multiple cloud providers, including AT&T and DreamHost. The effort also has contributions from Rackspace, Red Hat and HP.
While Ceilometer was a separate project for Folsom, it has now graduated and is an official OpenStack project. That means we are likely to see Ceilometer as part of the official Grizzly release, now set to debut on April 4th.
Having an integrated metering/measuring infrastructure is a huge advantage for IaaS. After all, the NIST definition of cloud computing describes cloud as a metered service, so how can you deliver a metered service without integrated metering? Now, with Ceilometer, it's an integrated part of the OpenStack platform as a whole.
Back in October I asked OpenStack Foundation Executive Director Jonathan Bryce about what might land in Grizzly from an incubated-project perspective, so it's great to see this come to fruition.
Eclipse Releases Open Source Orion 2.0 web-based IDE
By Sean Michael Kerner | March 04, 2013
From the 'Goodbye Desktop IDE' files:
I've been following the development of Orion since the Eclipse Foundation started the effort back in January of 2011. The basic idea behind Orion is to move development online into a web-based development model.
The Orion 1.0 release came out in October of last year, and here we are four months later with an Orion 2.0 release.
The big shift with Orion 2.0 is its ability to run on a node.js server.
"The small footprint of this server makes it suitable for embedded devices and potentially very large scale cloud scenarios," Orion developer John Arthorne wrote. "Having all the client and server tools written in the same language also raises some new possibilities and makes the Orion architecture very flexible."
The next major release of Orion is now scheduled for June. Among the big features planned for Orion 3.0 are expanded deployment options as well as an easier way to get up and running with Orion without the need to first create an account.