RealTime IT News

Blog Archives

.ORG Loses CEO

By Sean Kerner   |    August 26, 2010

From the 'Great Leadership' files:

Alexa Raad, the President and Chief Executive Officer of PIR (Public Interest Registry), which administers the .ORG Top Level Domain (TLD), is leaving her post effective September 24th.

I've had the good fortune to interview Raad at multiple points over the last 3.5 years that she has been CEO and have always been impressed by her foresight and leadership.

The TLD space for years was a relatively static one - it was just about the administration of domain names and the registry/registrar relationship. Things have changed in recent years, and in my view Raad played a crucial role during a critical period in the evolution of the Internet.

The .ORG TLD under the leadership of Raad was among the first major gTLDs to push for DNSSEC security two years ago. Raad realized that DNSSEC needed a multi-vendor/stakeholder coalition and she was instrumental in the formation of the DNSSEC Industry Coalition. Now in 2010, .ORG is signed for DNSSEC and that major shift in the way the Internet works is now in full swing.
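As a sketch of what "signed for DNSSEC" means in practice: a signed zone's answers carry RRSIG signature records alongside the ordinary DNS data, which is what a resolver checks (you can see them with `dig +dnssec org SOA`). The captured answer below is abbreviated and the signature value is a placeholder; only the record layout is real.

```python
# A DNSSEC-signed zone returns RRSIG records alongside ordinary answers.
# Sample output in the style of `dig +dnssec org SOA`; the serial and
# signature values here are placeholders, not real ones.
SAMPLE_ANSWER = """\
org. 900 IN SOA a0.org.afilias-nst.info. noc.afilias-nst.info. 2010082600 1800 900 604800 86400
org. 900 IN RRSIG SOA 7 1 900 20100909000000 20100826000000 61338 org. c2lnbmF0dXJlLi4u
"""

def is_dnssec_signed(answer: str) -> bool:
    """True if any record in the answer section is an RRSIG signature."""
    for line in answer.splitlines():
        fields = line.split()
        if len(fields) > 3 and fields[3] == "RRSIG":
            return True
    return False
```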

Fedora 14 alpha gets ROOT

By Sean Kerner   |    August 24, 2010

From the 'Famous Physicist' files:

The first alpha of Red Hat's Fedora 14 Linux is now available, and it sure has a very long list of new features. There are improvements to security, performance and virtualization as well as some interesting new analysis technology.

"Given that Fedora 14 is named after one of the giants of modern theoretical physics, it seems appropriate that Laughlin sees the introduction to Fedora of ROOT, an object-oriented, open-source platform for data acquisition, simulation and data analysis developed by CERN," Fedora developer Dennis Gilmore wrote in a mailing list posting.

On the security front there is new SCAP (Security Content Automation Protocol) support built-in which is a huge bonus for Fedora in my opinion. I saw a session on the new tech at LinuxCon a few weeks back and I was blown away by it.
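The appeal of SCAP is that security checklists become machine-readable XML that tools can evaluate and report on automatically. As a much-simplified illustration of that idea (the snippet below is made up, and real XCCDF content uses XML namespaces and far richer structure), here is a tiny rule-result summarizer:

```python
import xml.etree.ElementTree as ET

# Made-up, much-simplified XCCDF-style scan results; real SCAP content
# uses XML namespaces and far richer structure than this.
RESULTS = """\
<TestResult>
  <rule-result idref="ensure_firewall_enabled"><result>pass</result></rule-result>
  <rule-result idref="disable_telnet_service"><result>fail</result></rule-result>
  <rule-result idref="set_password_min_len"><result>pass</result></rule-result>
</TestResult>
"""

def summarize(xml_text):
    """Count scan outcomes (pass/fail/...) from a list of rule results."""
    counts = {}
    for rr in ET.fromstring(xml_text).iter("rule-result"):
        outcome = rr.findtext("result")
        counts[outcome] = counts.get(outcome, 0) + 1
    return counts
```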

Virtualization is always a big theme for Red Hat and in Fedora 14, I think we'll be seeing the biggest user-facing desktop improvement in years. Fedora 14 will include Spice (the Simple Protocol for Independent Computing Environments) as an open source solution for dealing with virtual desktops.

And for netbook users - a MeeGo 1.0 version is also part of the mix.

Lots of exciting stuff for sure to look forward to as Fedora 14 goes through its development process. The final release of Fedora 14 is currently scheduled for November 2, 2010.

OpenSolaris is not dead (yet)

By Sean Kerner   |    August 24, 2010

From the 'It's Not Over Until Ellison Sings' files:

The OpenSolaris Governing Board (OGB) has resigned, leaving the former Sun (now Oracle) open source operating system without a community board for governance.

Does this mean that OpenSolaris is dead?

Not quite. The key problem (in my view) is that Oracle has been unresponsive to the OGB's requests for clarification on multiple issues. Without a line of communication about the future of OpenSolaris, I don't think the OGB really had a choice, as they were operating in a vacuum with no information.

That said, as a working journalist, I know full well that Oracle isn't exactly the most responsive of companies to begin with. Don't get me wrong, I have tremendous respect for the excellent PR people that I've had the good fortune to work with at Oracle over the years, but Oracle as a corporate entity is not always going to be immediately responsive. Decisions are not made in the open, they're made in the boardroom and open source or not, that's the way that Oracle operates.

With Solaris 11 development now underway for a 2011 release, Oracle's engineers (who are also, coincidentally, in my view the bulk of the contributors to OpenSolaris) are tasked with building the commercial Unix operating system. Oracle has told me that they are taking bits from OpenSolaris for Solaris 11 and there will also be new 'commercial' bits too.

When do Firefox users go private?

By Sean Kerner   |    August 24, 2010


From the 'Web Privacy' files:

Privacy Mode (aka Porn Mode) is a browser feature in most modern web browsers that aims to 'hide' your activities from other users on the same system.

Personally, I've always wondered when people use Privacy Mode, and now Mozilla has a study out with the answer.

Via its Test Pilot add-on, which anonymously (with an opt-in model) sends data to Mozilla about browser activities, Mozilla has some interesting insights into how users are using private mode. No, they don't know if it's actually being used for porn -- they don't know which sites users are browsing at all. It's just a study of how the feature is used, but still interesting stuff.

Turns out that most users only use private mode for about 10 minutes at a time. Hmmm, what only takes ten minutes to browse?

As for time of day, apparently users are slacking off during work-day hours to get their 'private' fix. According to Mozilla's data, the times of day with the most usage are:

  1. Lunch: users likely switch into Private Browsing during their lunch breaks. We see a major spike between 11 am and 2 pm.
  2. After School / Work: users appear to switch on Private Browsing just after they've returned from work or school, which is around 5 pm.
  3. After Dinner: we have another substantial peak between 9 and 10 pm.
  4. Late Night: a minor spike exists an hour or two after midnight.
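A study like the one above only needs anonymized session start times, not URLs, which is why it can stay privacy-preserving. A toy sketch with invented data of how the time-of-day peaks might be computed:

```python
from collections import Counter

# Invented, anonymized private-browsing session start hours (0-23) --
# the only datum a time-of-day usage study like this needs.
session_hours = [1, 11, 12, 12, 13, 17, 17, 17, 21, 21]

def peak_hours(hours, top=3):
    """Return the most common session start hours, busiest first."""
    return [hour for hour, _ in Counter(hours).most_common(top)]
```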

Not too surprising -- but good to know (for corp IT at least) that private mode isn't taking up the whole workday. On an anonymous basis, I do think it would be interesting to find out if Private Mode actually is used mostly for viewing illicit content or for something more innocuous like online shopping.

Ubuntu 11.04 will be called the Natty Narwhal

By Sean Kerner   |    August 17, 2010

From the 'Animal Kingdom' files:

One of the great attributes of Ubuntu Linux is its interesting codenames.

That tradition will continue in 2011 with the release of Ubuntu 11.04, codenamed Natty Narwhal.

No, I've never seen a Narwhal, but I've never seen a Meerkat (that's the upcoming Ubuntu 10.10 release) either. In fact, Ubuntu's naming convention usually results in me using my favorite search engine in an attempt to see what the animal in question looks like.

So, not only do I benefit as a Linux user from the new innovations that Ubuntu introduces, I also expand my knowledge of the animal kingdom.

"The Narwhal, as an Arctic (and somewhat endangered) animal, is a fitting reminder of the fact that we have only one spaceship that can host all of humanity (trust me, a Soyuz won't do for the long haul to Alpha Centauri)," Ubuntu Founder Mark Shuttleworth wrote in a blog post. "And Ubuntu is all about bringing the generosity of all contributors in this functional commons of code to the widest possible audience, it's about treating one another with respect, and it's about being aware of the complexity and diversity of the ecosystems which feed us, clothe us and keep us healthy. Being a natty narwhal, of course, means we have some obligation to put our best foot forward."

Linux kernel report shows continued innovation. 2.6.36 coming soon #LinuxCon

By Sean Kerner   |    August 12, 2010

From the 'LWN Rulz' files:

BOSTON. Linux kernel developer Jon Corbet took the stage at LinuxCon today to deliver his Linux kernel report.

Corbet said that Linux kernel development is maintaining a fast cadence with about 80 days between Linux releases.

From the 2.6.31 kernel to the recent 2.6.35 kernel, an average of 1,100 developers contributed to each release, with about 124 changes made per day.
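Some back-of-the-envelope arithmetic from Corbet's numbers: roughly 124 changes a day over an ~80-day cycle works out to nearly 10,000 changes per release, or about nine changes per contributing developer.

```python
# Rough arithmetic implied by the kernel report's numbers.
days_per_release = 80      # ~80 days between releases
changes_per_day = 124      # average merged changes per day
developers = 1100          # average contributors per release

changes_per_release = days_per_release * changes_per_day
changes_per_developer = changes_per_release / developers

print(changes_per_release)               # 9920
print(round(changes_per_developer, 1))   # 9.0
```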

In terms of who is contributing, Red Hat retains the top spot among vendors at 12 percent of all contributions. The top contributor overall is the volunteer category coming in at 16.6 percent.

"The list of key contributors has remained relatively static over time," Corbet said.

Web services need to be Free #LinuxCon

By Sean Kerner   |    August 12, 2010

From the 'Let Freedom Reign' files:

BOSTON. Stormy Peters, executive director of the GNOME Foundation, wants people to think about their online software freedoms.

In her keynote at LinuxCon, Peters echoed some of the same themes from her talk two weeks ago at OSCON about web data freedom.

"We need to be careful that the choices we make today don't affect the choices we have in the future," Peters said.

Web services, whether it's Twitter, Facebook, Gmail or other services, could be a risk to users' freedom if people aren't careful. Peters suggested that users need to make sure their data is portable so they can move it if need be.

She suggested that all users think about access to their data, ensure they're not locked in, make sure they back up their data and take a look at the license.
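Peters' portability test boils down to a simple question: can you get your data out in an open format and read it back somewhere else? A minimal sketch of that round trip, with an invented profile and JSON standing in for whatever open format a service might offer:

```python
import json

# Invented user data; JSON stands in for any open, documented format.
profile = {
    "user": "alice",
    "contacts": ["bob", "carol"],
    "posts": [{"date": "2010-08-12", "text": "Hello from LinuxCon"}],
}

def export_portable(data):
    """Serialize to an open format you can carry to another service."""
    return json.dumps(data, indent=2, sort_keys=True)

def reimport(blob):
    """The other half of portability: reading your data back losslessly."""
    return json.loads(blob)
```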

"What we need is free web services in terms of licenses, cost and data," Peters said.

Why MeeGo is different #LinuxCon

By Sean Kerner   |    August 11, 2010

From the 'Android Who?' files:

BOSTON. MeeGo is different from other mobile operating systems because it is open.

At LinuxCon, Thomas Miller, Head of MeeGo Ecosystem Development at Nokia, and Derek Speed, Senior Technologist at Intel Corporation, gave a very succinct view of why MeeGo will win in the end.

"MeeGo is different because companies can participate at the point of development -- instead of just being handed something," Miller said.

Speed added:

"You can't predict where innovation will come from and we believe that the notion of community based investment and people working on creativity and contribution is really the way to go."

Monty's MariaDB extends the open source database #LinuxCon

By Sean Kerner   |    August 11, 2010

From the 'MySQL Legacy' files:

BOSTON. MySQL founder Monty Widenius did a lot of things right in the early days of MySQL.

Speaking at the LinuxCon conference, Widenius detailed the history of MySQL and noted that community involvement early on was key to MySQL's success.

"What makes a successful open source project?" Widenius said. "Be responsive to the community and treat them well and have good documentation."

As MySQL grew, Widenius' message of community got pushed back, and that's where things started to go wrong. Widenius also slammed Sun, saying it had no respect for engineering talent. Widenius left Sun in 2009; the main reason for his new MariaDB effort was to save the people he cared about and to give MySQL a good home, since he didn't believe it was in good hands.

A focus on community is what MariaDB is all about. Widenius said he is now following a 'hacking business model': it's not a company being built to be sold, it's democratic, and employees are all shareholders.

"MariaDB will always be open source and there is no enterprise version," Widenius said.

Top 10 best practices for enterprise open source adopters #LinuxCon

By Sean Kerner   |    August 11, 2010

From the 'Open Source Has Won' files:

BOSTON. Congratulations, open source developers: according to Forrester Research Analyst Jeffrey Hammond, you're on the winning team.

Hammond delivered a keynote address at LinuxCon today where he said that open source adoption has now crossed the chasm to mainstream adoption.

He noted that according to his data, only 1 in 5 enterprises are not using open source in some way today.

What does it take to gain that adoption and push it forward? Hammond offered a top-ten list of best practices based on his experience with enterprise open source adopters.

1. Appoint an OSS steward - make sure there is a go-to person who can interface with the right people.
2. Create a comprehensible policy - about 36 percent of the organizations Hammond speaks to don't have a policy, he said.
3. Frontload acquisition processes.
4. Require project leaders to identify OSS dependencies.
5. Use EA to regulate exploitation and maintenance.
6. Trust your team - but verify with code scanning utilities.
7. Maintain a repository of pre-approved OSS components.
8. Don't dwell on process and artifacts; focus on outcomes.
9. Don't expect perfection and plan for remediation.
10. Set a contribution policy - it will happen over time anyway.
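Practice 6, "trust but verify," is usually handled by commercial scanning tools, but the core idea can be sketched in a few lines: walk the source tree and flag files whose headers mention a known license. The marker list below is deliberately naive and illustrative only; real scanners match far more than three strings.

```python
import os

# Naive, illustrative marker list; real code-scanning tools match far more.
LICENSE_MARKERS = ("GNU General Public License", "Apache License", "MIT License")

def scan_tree(root):
    """Flag files whose first 2 KB mention a known license marker."""
    found = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    head = f.read(2048)
            except OSError:
                continue  # unreadable file: skip rather than fail the scan
            hits = [marker for marker in LICENSE_MARKERS if marker in head]
            if hits:
                found[path] = hits
    return found
```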

Novell pitches Linux innovation #LinuxCon

By Sean Kerner   |    August 11, 2010

From the 'Cloud Rules' files:

BOSTON. Markus Rex, senior vice president and general manager of open platform solutions at Novell, sees the cloud as a platform for innovation.

Rex was speaking at the LinuxCon conference in a keynote address.

"Linux is the universal platform that ties the cloud together," Rex said.

It's not just about the cloud as a delivery model either. Rex noted that workload management, security and tools to build appliances for the cloud (all of which Novell has products for) are the key to really unlocking innovation in the cloud.

Why Android should be in the main Linux kernel #LinuxCon

By Sean Kerner   |    August 10, 2010

From the 'Android Loves Linux' files:

BOSTON. Android is often cited as a success story for mobile Linux. Yet, Google's Android code is no longer part of the mainline Linux kernel.

At LinuxCon, Red Hat engineer Matthew Garrett detailed to a standing-room-only audience why Android code should return to Linux.

No, Garrett doesn't work for Google, but he did tell the audience that he interviewed for a job working on Android - one that he didn't get.

The entire discussion around why Android is (or isn't) in the kernel led to some heated debate at the end, with Garrett telling one audience member to 'shut up,' and after the individual did not, Garrett asked the person to leave.

Garrett noted that many developers are now targeting Android for all kinds of code. He added that Android contains all kinds of code for power management and other low-level functions that is really interesting.

"As kernel developers we want this code in the kernel," Garrett said. "When we get code, we get new solutions, and sometimes you get code that gives new insight into other problems."

He added that some of Android's kernel functionality ends up in drivers, and that's the code kernel developers want to integrate into mainline.

"We want to be able to take Android drivers and have them in the Linux kernel," Garrett said. "We don't want there to be a distinction between an Android driver and a Linux driver."

Kernel.org getting major Linux hardware reboot #LinuxCon

By Sean Kerner   |    August 10, 2010

From the 'How Hot Dogs Are Made' files:

BOSTON. At the heart of all Linux kernel development is the infrastructure.

In a session at LinuxCon, the Linux Foundation's John Hawley said that the infrastructure for kernel.org is set to undergo a major refresh.

Overall, he said there will be four new servers coming online soon. Two of them will be stacked with 144 Gigabytes of RAM per box. The new hardware is being provided by HP.

HP's Bdale Garbee, who was present in the audience, said, "It's a steaming pile of hardware."

Hawley also noted that kernel.org runs Fedora Linux, as it's a fast-moving Linux distribution with support for newer kernel versions. He added that most of the kernel.org admins have grown up with Red Hat, and that's why they're using Fedora.

When it comes to new kernel releases, Hawley said that they typically don't result in large traffic spikes to kernel.org. He noted that, in his view, more people get their kernels from their distributions. Kernel.org also provides mirroring for Linux distributions. The top user in terms of bandwidth is Mandriva at 1.5 Terabytes, followed by Fedora at 800 Gigabytes.

The continuum of Linux news #LinuxCon

By Sean Kerner   |    August 10, 2010

From the 'Panel Preview' files:

BOSTON. This afternoon I'm set to be on a panel of my peers in the Linux journalism business. The business of Linux journalism is one that has changed in recent years, but for me the song has remained the same.

I've been a Linux journalist here since 2003, and while much has happened, there are a few common threads that continue to re-appear. I call it the Continuum of Linux News.

The continuum is what keeps me and my peers busy; it is the ebb and flow of Linux news that I don't think will change any time soon.

The continuum includes:

  • Linux kernel releases (and associated kernel development)
  • Linux distribution releases (and associated events/developments)
  • Linux application/system management
  • My app/hardware runs on Linux type stories
  • This is the year of the Linux desktop
  • Linux is (in)secure
  • Linux is used by everyone on Earth (stats stories)
  • Legal stories (including the FUD mongers)
  • Linus says ... (i.e., the kernel is bloated)
  • Shuttleworth says ...

AppArmor more user friendly than SELinux? #LinuxCon

By Sean Kerner   |    August 09, 2010


From the 'Linux Security' files:

BOSTON. There are a number of access control systems available for Linux, but which one is easier to use?

At LinuxCon, Z. Cliffe Schreuders, a doctoral candidate at Murdoch University in Australia, presented the findings of a small usability study he conducted into Linux access control systems.

Long story short, his study of 39 people found AppArmor to be generally more user-friendly than SELinux. SELinux is the system used by Red Hat, while AppArmor is favoured by openSUSE and Ubuntu.
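Part of that usability gap comes down to syntax: AppArmor confines a program with a path-based profile that reads almost like plain English, while SELinux reasons in terms of filesystem labels and type enforcement rules. A hypothetical AppArmor profile for a made-up `/usr/bin/example` binary looks roughly like this:

```
# Hypothetical profile for a made-up /usr/bin/example binary
/usr/bin/example {
  #include <abstractions/base>

  /etc/example.conf r,        # may read its config file
  /var/log/example.log w,     # may write its log
  network inet stream,        # may open TCP sockets
}
```

Everything not listed is denied, which is the same basic mandatory-access-control promise SELinux makes, just expressed in file paths rather than labels.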

Lolpolicy for defining Linux security #LinuxCon

By Sean Kerner   |    August 09, 2010

From the 'Useful Lolcats' files:

BOSTON. Ever wonder how lolspeak, the language of lolcats, could be used to secure Linux?

At LinuxCon, Joshua Brindle from Linux security vendor Tresys detailed something he called lolpolicy for making SELinux security policies easier to manage.

Lolpolicy is Brindle's half-serious implementation of something he referred to as CIL (Common Intermediate Language), an intermediate policy language for SELinux. It's an attempt to clean up some of the management layer of SELinux, Brindle said.

Now lolpolicy is one potential language overlay for CIL. So say, for example, you want to create a policy for your staff - Brindle said you just input 'I iz staff', and if you want full access, input 'om nom nom' (yeah, lolspeak is... weird).
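As a purely hypothetical sketch of the idea (the phrase-to-statement mapping below is invented for illustration and is not Brindle's actual lolpolicy or CIL syntax), a language overlay like this is essentially a translation table from friendly phrases to formal policy statements:

```python
# Invented phrase-to-statement mapping -- illustration only, not the real
# lolpolicy or CIL syntax.
LOL_RULES = {
    "I iz staff": "(role staff)",
    "om nom nom": "(allow staff all_files (all_perms))",
}

def lolpolicy(phrase):
    """Translate a lolspeak phrase into a policy-ish statement."""
    return LOL_RULES.get(phrase.strip(), "; unrecognized lolspeak")
```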

SELinux sandboxing for Linux app security #LinuxCon

By Sean Kerner   |    August 09, 2010

From the 'Playing With Sand' files:

BOSTON. SELinux is a great way to limit the access rights/roles on a Linux machine.

But how do you limit CPU or memory usage of a given application? Red Hat engineer Dan Walsh has a solution he calls SELinux Sandbox, which he demoed at the LinuxCon conference today.

Walsh stressed that he's not trying to replace virtualization with SELinux sandboxing, but he is trying to create an easier way to isolate and control applications.
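SELinux Sandbox does its confinement with SELinux policy, but the underlying question - how do you cap what a single process can consume? - can be illustrated with plain POSIX resource limits. This is an analogy only, not what Walsh's tool actually uses:

```python
import os
import resource

def run_with_memory_cap(limit_bytes, alloc_bytes):
    """Fork a child, cap its address space, and try a large allocation.

    Returns the child's exit code: 0 if the allocation succeeded,
    1 if the cap stopped it.
    """
    pid = os.fork()
    if pid == 0:
        # Child only: the cap does not affect the parent process.
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
        try:
            buf = bytearray(alloc_bytes)  # fails with MemoryError if capped
            os._exit(0)
        except MemoryError:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```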

There are a lot of people (myself included) who have often struggled with SELinux and its permission system. For those types of users, Walsh also has an option called SEunshare, which will enable a user to set up a sandbox without running with full SELinux control.

The effort still isn't completely baked yet from what I saw, but the potential is nothing short of awesome for total Linux security. Any application or even a document could be isolated and 'sandboxed,' creating an ultra-hygienic environment for computing.

Yes, you can do a degree of sandboxing with virtual machines today, but Walsh's approach is faster, more efficient and likely more flexible too.

Is Illumos 'good' for OpenSolaris?

By Sean Kerner   |    August 04, 2010

From the 'Too Soon?' files:

The ability to fork an application is one of the great strengths of open source software. If a project isn't going in a direction that users/developers want, then a fork provides an option.

Illumos - a new project that is aiming to build open source components of Oracle's OpenSolaris - is not a fork, at least that's what the project leadership told me last week. What Illumos will provide is a way for users/developers to eventually create a fork of OpenSolaris.

Yes, it's good news for OpenSolaris users that there might one day be an option - a fork - that could meet needs that the main project does not. However, it is important to remember that OpenSolaris (and Solaris) are valuable operating systems because of the tremendous amount of capital/research that Sun (and now Oracle) has invested in the platform.

Red Hat vs. Ubuntu: Why upstream commits matter

By Sean Kerner   |    August 04, 2010

From the 'Linux Fundamentals' files:

There has been some 'debate' that has bubbled to the surface again recently about Ubuntu vs. Red Hat on the issue of who contributes what to Linux.

Red Hat leads the Linux world with its contributions to the core Linux kernel, and it also leads with its contributions to the GNOME desktop project. Ubuntu, on the other hand, does contribute (though not as much), and is focused on 'fit and finish' for the most part.

I personally don't have much issue with the fact that Ubuntu doesn't contribute as much upstream as Red Hat -- though it is something that matters. Let me explain.

Linux and open source community development is not a communist (or Marxist) model. Karl Marx had the slogan 'From each according to his ability, to each according to his need,' which works for communists, not necessarily for Linux. In Linux and open source, the model I see - 'to each according to his need and from each according to his need' - is more appropriate.

If you need something fixed or done, then you make that contribution upstream. Doing everything upstream is the only way that Linux will remain un-fragmented. Without upstream there is no Linux community.

Inside the Black Hat Wi-Fi control room

By Sean Kerner   |    August 03, 2010

From the 'Hostile Wi-Fi Networks' files:

It never ceases to amaze me how so much can be done with so little. Case in point: the Aruba Networks-provided Wi-Fi network at the recent Black Hat security conference in Las Vegas.

I've got a more detailed story already published on the setup of the network (so be sure to check that out too), but actually seeing the control room adds another dimension.

The control center for the Black Hat Wi-Fi network was actually just a small corner of one desk in one room. The whole environment was managed and monitored by a cloud-based service that Aruba Networks provided. To make it even sweeter, they actually used an iPhone app to do some of the monitoring too.

It's what the cloud and mobile apps are all about - more power with less gear and total mobility. Let's not forget the role of Linux here either: Aruba's gear uses Linux as its underlying operating system. In the control room, the two notebooks that Aruba was using were also running Linux (Fedora and CentOS).

So here you go, an inside look at what the control room at Black Hat's Wi-Fi network actually looked like.