
Red Hat Summit Blog

Training and certification opportunities at Red Hat Summit 2015

Monday, March 23rd, 2015

We’re offering a lot of opportunities for you to connect with curriculum instructors, examiners, and developers at the 11th annual Red Hat Summit in Boston. Hands-on, solution-focused training and testing will help you get the most out of your Red Hat® environment and conference experience.

Some of these offers have limited capacity, so get signed up early to save your spot.

CERTIFICATION EXAM: Get OpenStack certified by Red Hat
Gain the skills you need to install, configure, and manage Red Hat Enterprise Linux® OpenStack Platform. Register for the OpenStack certification exam at Red Hat Summit for $600, and get our OpenStack Online Learning course at no cost to help you prepare. Our self-paced, online course provides the equivalent of 4 days of classroom content and hands-on labs. You can train at your own pace, in your own time, prior to taking the exam onsite. Reserve your spot for a certification exam when you register for Red Hat Summit.

POWER TRAINING: Pre-conference training at Summit 2015
Power Training courses allow you to supplement your Red Hat Summit experience with 2 additional days of hands-on training on specific technologies or areas of interest prior to the event. This year’s course topics include:

  • What’s New in Red Hat Enterprise Linux 7?
  • Red Hat Enterprise Linux 7 Performance Tuning
  • Red Hat High Availability Clustering with Pacemaker
  • Red Hat JBoss® Enterprise Application Platform Administration Essentials
  • Red Hat Cloud Storage with Ceph
  • OpenStack User and Life Cycle Management
  • Managing Containers with Red Hat Enterprise Linux Atomic
  • Red Hat CloudForms Hybrid Cloud Operations

These courses sell out quickly each year, so reserve your spot now. Save $200 on registration for a limited time! You can sign up for a course during Red Hat Summit conference registration as an optional add-on.

HANDS-ON LABS: Deep dive into the newest Red Hat technologies
The best way to learn is to do. Hands-on labs give Summit attendees special access to all Red Hat technologies. We will provide the equipment and expertise. Product experts and solution architects will be on hand to provide immediate guidance. View the full agenda for more details.

  • Hands on Red Hat Enterprise Linux Atomic
  • Hands on with Satellite 6.1
  • Transitioning from Satellite 5 to Satellite 6
  • OpenShift for Developers
  • OpenShift v3.0 for Operators: Optimizing deployments on RHEL 7 and Atomic
  • Hands on with CloudForms
  • Racing Camel with BPM and JBoss Fuse
  • Containerizing applications: Existing and new
  • Performing live backups on Red Hat Gluster Storage
  • Integrating Ceph and Red Hat Enterprise Linux OpenStack Platform
  • Red Hat Gluster Advanced Features
  • Security Compliance made easy with OpenStack
  • Mobile Application Platforms: A Hands on Introduction
  • Docker for Java Developers
  • Red Hat Enterprise Linux Identity Management


Taste of Training — Attend free hour-long, hands-on guided labs pulled from Red Hat courses and taught by a Red Hat curriculum member. These sessions require advance sign-up, which will be available to registrants before the conference begins.

  • CloudForms Automate and Red Hat Enterprise Virtualization
  • Fabric deployment with Red Hat JBoss Fuse
  • Red Hat high availability clustering with Pacemaker
  • Managing software & errata deployment with Red Hat Satellite 6
  • Managing Containers with Red Hat Enterprise Linux Atomic Host
  • Using rhevm-shell to manage Red Hat Enterprise Virtualization
  • Introduction to Ceph
  • OpenStack with Nova Docker

Breakout session — Red Hat Training and Certification Update: Ken Goetz, vice president, Red Hat Global Learning Services; Pete Hnath, Red Hat curriculum; Randolph Russell, director, Red Hat Certification

  • Thursday, June 25, 10:40 a.m.: In this session, Red Hat Training and Certification leaders will discuss some of the new directions Red Hat is taking to prepare organizations and individuals for the IT challenges of today and tomorrow. Among these is Red Hat Subscription Learning, a forthcoming offering that supports the continuous learning technology professionals must now engage in. Other topics covered will include Red Hat’s training and certification plans around DevOps and OpenStack.

NETWORKING OPPORTUNITIES: Red Hat Certified Professional reception
Wednesday, June 24, 6-8 p.m.: Attend an exclusive reception for all Red Hat Certified Professionals (RHCP) at Red Hat Summit and enjoy an evening of celebration, food and drinks, and your very own RHCP Red Hat Summit t-shirt. At this event, Randy Russell, director of Certification, will introduce the 2015 RHCP of the Year.

Visit the Red Hat training and certification lounge outside of the Training and Labs zone and speak to members of the training and certification team. Get answers to your certification questions and share your feedback. The lounge will be open Wednesday, June 24, and Thursday, June 25, from 12-2 p.m.



The OpenStack® Word Mark and OpenStack Logo are either registered trademarks / service marks or trademarks / service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation or the OpenStack community.

Red Hat Enterprise Linux roadmap: Summit Q&A

Thursday, June 19th, 2014

If you attended the Red Hat Enterprise Linux roadmap session at this year’s Red Hat Summit, you saw it was a packed house with a lot of great questions. And now that Red Hat Enterprise Linux 7 is generally available, we want to provide even more explanation and details on questions asked by attendees.

Check out what attendees wanted to learn more about – sorted by topic – in the 2014 roadmap session below. You’ll also find links to additional information where available. In case you missed it, you can download the presentation slides here.


Q: You select the packages based on popularity. But do you have a limit on how many packages you support? Does this limit grow or shrink over time?

A: We select packages based on what enterprise customers need to support their environments; we do not have a set limit or target number of packages. If you need a specific package, we suggest that you file a feature request for us to review. You can file a feature request at the Red Hat Customer Portal:

Q: How does Red Hat leverage relationships with other vendors such as SAP and Oracle? Are these vendors already working to make their software Red Hat Enterprise Linux 7 compatible?

A: Red Hat works with many ecosystem partners to certify that their products work well with all of the products in our portfolio. To increase reach beyond what we can accomplish on a one-to-one basis, we invite ecosystem partners to participate in one of our many partner programs. For those we cannot reach directly or through one of these programs, we make betas publicly available to download and test with.

Q: Does HERM support include Power and s390 platforms?

A: At present HERM is supported on x86_64 on Red Hat Enterprise Linux 7.

Q: Do you foresee any issues with Red Hat Enterprise Linux 7 supporting current Java apps such as JBoss 6?

A: OpenJDK 7 is the system default Java SE compliant runtime available in Red Hat Enterprise Linux 7.   In addition to being available in Red Hat Enterprise Linux 7, OpenJDK 7 can also be installed in Red Hat Enterprise Linux 5 and 6.   While there are no technical issues preventing applications from running on Red Hat Enterprise Linux 7 using OpenJDK 7, customers are advised to consult the respective third-party ISV application support matrices to ensure that their application is supported in this configuration.

JBoss EAP 6 running on OpenJDK 7 and Red Hat Enterprise Linux 6 is available as a supported configuration today[1]. Please contact Red Hat Global Support Services for information on availability of JBoss EAP on Red Hat Enterprise Linux 7 as a supported configuration.

Red Hat recommends that customers not deploy Java applications on Red Hat Enterprise Linux 7 if they are designed to run on Java 6 (or earlier versions of Java). Java 6 is available in Red Hat Enterprise Linux 7 for legacy applications that are self-contained and have no dependencies on system Java packages.
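For reference, a minimal sketch of installing OpenJDK 7 and making it the system default on Red Hat Enterprise Linux 6 (the package name below is the standard OpenJDK 7 package; which JVM you select is up to you):

bash# yum install java-1.7.0-openjdk
bash# alternatives --config java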

Q: Will DTrace be in Red Hat Enterprise Linux 7?

A: SystemTap on Linux provides similar capabilities to DTrace. If you are familiar with DTrace and have existing DTrace scripts, the following URLs can help you convert them into equivalent SystemTap scripts and find helpful examples:

For a complete guide to SystemTap, consult:
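To give a flavor of the scripting model, here is a minimal SystemTap sketch (run as root, with the kernel debuginfo packages installed) that counts system calls per process for 10 seconds and prints the top 10, roughly the kind of one-liner DTrace users often start with:

bash# stap -e 'global counts
probe syscall.* { counts[execname()]++ }
probe timer.s(10) {
  foreach (name in counts- limit 10)
    printf("%s: %d\n", name, counts[name])
  exit()
}'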

Q: Why was Thunderbird removed as a mail client?  For desktop users, what are the endorsed alternatives, and what benefits do they offer?

A: As part of the package selection process, Red Hat evaluates the health of a project and looks for an active community to help address and accept bug fixes and security errata, as well as to provide continuous improvement in functionality. With the support for this project becoming limited, we elected not to include Thunderbird at this point in time. Red Hat Enterprise Linux 7 will deliver Evolution, which also shipped with Red Hat Enterprise Linux 5 and 6 and is still an actively maintained project.

Red Hat continues to evaluate Thunderbird for inclusion in Red Hat Enterprise Linux 7 and may include it in a future release. Users who wish to use Thunderbird today are encouraged to download and install Thunderbird from Fedora EPEL 7.
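As a minimal sketch, installing Thunderbird from EPEL 7 might look like the following (the epel-release URL follows the Fedora project’s current naming convention and may change):

bash# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
bash# yum install thunderbird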


Q: Believe it or not, we have legacy Red Hat Enterprise Linux 4 applications that we can’t move off… yet. Could they be put into a container?

A: We do not plan to support Red Hat Enterprise Linux 4 as a container format.

Q: Can I limit the CPU usage of all processes except login on terminals using containers? In my environment today I sometimes find a virtual machine stuck at 100% CPU utilization – due to a Java process – and can’t log in to see what is happening! Note: we use LDAP-AD authentication… which may be why login doesn’t work.

A: Yes, you can do this with cgroups. Containers are built using various control group features, and two of these controllers, cpu and cpuset, are specifically designed to limit CPU usage for the processes inside a control group, so you can limit CPU usage by processes inside a given container. In greater detail, you can limit CPU usage either by restricting the logical CPU(s) a process can be scheduled on or by limiting the amount of time a process can spend on a particular processor. As for the runaway Java application in a virtual machine that eats up all CPU cycles, another way to access such a system when this happens is to enable the sched_autogroup parameter:

bash# echo '1' > /proc/sys/kernel/sched_autogroup_enabled

This will automatically configure the Linux cgroups feature to group together related processes so that, under heavy load, certain processes still remain performant under the kernel scheduler. It keeps a user login session free from the runaway CPU/memory process and allows an administrator to get into the system (and kill the runaway app). Also, for OpenJDK instances, we recommend you take a look at Thermostat, which is available via RHSCL 1.1. Thermostat will allow you to monitor, instrument, and manage the resources associated with a running JVM on a physical, virtual, or cloud instance of Red Hat Enterprise Linux.
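For the container case, here is a minimal sketch using the cgroup v1 cpu controller directly (the group name “capped”, the PID 1234, and the 25% quota are illustrative; on Red Hat Enterprise Linux 7 you would more typically set the equivalent through systemd unit properties or Docker run options):

bash# mkdir /sys/fs/cgroup/cpu/capped
bash# echo 100000 > /sys/fs/cgroup/cpu/capped/cpu.cfs_period_us
bash# echo 25000 > /sys/fs/cgroup/cpu/capped/cpu.cfs_quota_us
bash# echo 1234 > /sys/fs/cgroup/cpu/capped/tasks

With a quota of 25000 µs per 100000 µs period, the processes in the group can consume at most a quarter of one CPU.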

Q: How do containers functionally differ from Solaris zones?

A: Containers are conceptually similar to Solaris zones. Containers, however, are implemented differently, via different kernel and user space components in Linux. The four parts needed to obtain a container environment are (1) control groups, (2) namespaces, (3) SELinux, and (4) systemd. Control groups and namespaces are implemented in the Linux kernel and exported to user space. SELinux (in user space) provides security and access control for the processes inside the container, and systemd provides the user space management infrastructure for the setup and tear down of containers (in conjunction with Docker). Docker itself acts as the user interface to help create and launch applications inside of a container.
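You can see the namespace portion of this directly from the host. A minimal sketch (the image name rhel7 and container name demo are illustrative):

bash# docker run -d --name demo rhel7 sleep 1000
bash# docker inspect --format '{{.State.Pid}}' demo
bash# ls -l /proc/<pid>/ns

The symlinks under /proc/<pid>/ns show the mount, network, PID, and other namespaces that isolate the container’s processes from the host.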


Q: Why is XFS the default file system in Red Hat Enterprise Linux 7?

A: Given the growth of data and file sizes, we felt it was important to introduce a scalable file system that can also handle large files, which is what XFS delivers. We are still providing full support for ext4, along with enhancements to the maximum file and file system sizes – the limit is now 50TB, up from 16TB in Red Hat Enterprise Linux 6. Customers can change the default to use whichever file system best satisfies their business needs.
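Switching a given volume to a non-default file system is a one-time formatting choice; a minimal sketch (device and mount point are illustrative):

bash# mkfs.ext4 /dev/sdb1
bash# mount /dev/sdb1 /mnt/data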

Q: Are there any enhancements under Red Hat Enterprise Linux 7 to recover read only file systems without needing to perform a host reboot?

A: There are no changes in this area in Red Hat Enterprise Linux 7. For a non-root file system, you can unmount, complete the required admin tasks (to fix the file system), and mount it again. For a root file system you will likely need to reboot (possibly to single-user mode) in order to repair possible file system corruption.
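For the non-root case, the sequence might look like this minimal sketch (device and mount point illustrative; use xfs_repair for XFS or e2fsck for ext4):

bash# umount /mnt/data
bash# xfs_repair /dev/sdb1
bash# mount /dev/sdb1 /mnt/data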

Q: Can partitions be re-sized with or without downtime when needed?

A: Yes, you can re-size partitions; refer to the Storage Administration Guide. The guide explains that re-sizing can be done without a reboot if you unmount any partitions on the device and turn off any swap space on the device.
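As a minimal sketch of growing an unmounted partition and its ext4 file system (the device names, partition number, and new end point are illustrative, and parted’s resizepart command is only present in newer parted builds):

bash# umount /mnt/data
bash# parted /dev/sdb resizepart 1 20GB
bash# e2fsck -f /dev/sdb1
bash# resize2fs /dev/sdb1
bash# mount /dev/sdb1 /mnt/data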

Q: With respect to file systems, are there any new features to help with the configuration of NFS4 with Kerberos?

A: Yes. We now have tighter integration with Red Hat Enterprise Linux Identity Management (IdM) services. Please refer to Steve Dickson’s Evolving & Improving Red Hat Enterprise Linux NFS presentation for additional information.

Q: What about an interface to set up a clustered NFS or any clustered file system?

A: This is under active development, based on the new capabilities of Pacemaker in Red Hat Enterprise Linux 7. Keep an eye out for new features in this area in future releases.


Q: Minimal install – why does this include infrared utils, cd writing tools, Bluetooth tools, etc.? I consider these bloatware for headless servers.

A: The minimal install is the smallest recommended environment suitable for general use. For a smaller, but still relatively functional environment, use the @base group in a kickstart file. For an absolute bare-bones installation set, use the @core package group, which is a subset of @base.
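The corresponding kickstart fragment is a minimal sketch like this (use @base instead of @core for the slightly larger environment):

%packages
@core
%end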

Q: Does the improved text mode installer interface include VTY support on Power systems?

A:  Yes.  Note however that terminal multiplexing during installation is provided by tmux; the VTY device should be fine to run the text mode installer.

Q: About the installer… As of Red Hat Enterprise Linux 5, the ability to install on a server via serial console was made inoperable. This was not fixed in Red Hat Enterprise Linux 6. In order to install, I have to make custom boot images with hacked syslinux options. Is this fixed in Red Hat Enterprise Linux 7, or perhaps there are plans to fix this?

A:  The revamped text mode in the Red Hat Enterprise Linux  7 installer makes it more suitable for serial consoles.


Q: Are there any new features to support Microsoft Windows integration like integrating with Active Directory, user accounts, shares and such?

A: In addition to Samba and CIFS support, Red Hat Enterprise Linux includes updates to LibreOffice that facilitate file sharing via SharePoint, an update to Evolution that can integrate with Exchange Server and support calendaring across a heterogeneous environment, and enhancements to Identity Management for easier integration with Active Directory, including a cross-realm Kerberos trust model for easier interoperability between Active Directory and LDAP.

Q: Does Red Hat Enterprise Linux 7 Desktop support authentication of Active Directory Smartcard only accounts?

A:  Yes, this should work fine via pam_pkcs11 and pam_krb5.


Q: What are the differences between network bonding and teaming?

A:  Team Driver is new for RHEL 7 and provides a mechanism to team multiple network devices (ports) into a single logical interface at the data link layer (Layer 2).  This is typically used to increase the maximum bandwidth and provide redundancy.

Even though this capability is similar to the existing Linux kernel bonding driver, the Team Driver project doesn’t try to replicate it; instead it solves the same problem very differently, using a modern, modular, user space based controlling approach. Only the necessary data fast-path parts live in the kernel, and the majority of the logic is implemented as a user space daemon. This approach provides a number of advantages over traditional bonding, including more stability, easier debugging, and much simpler extension, while still providing equal or better performance in some cases.

Team Driver supports LACP (IEEE 802.3ad) and can be managed from either NetworkManager or the traditional network initscripts infrastructure.
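As a minimal sketch, an active-backup team can be brought up by hand with teamd (the port names em1/em2 and the config path are illustrative; in practice you would usually let NetworkManager or the initscripts manage the team):

bash# cat > /tmp/team0.conf <<'EOF'
{
  "device": "team0",
  "runner": { "name": "activebackup" },
  "ports": { "em1": {}, "em2": {} }
}
EOF
bash# teamd -g -f /tmp/team0.conf -d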


Q: We are still on Red Hat Enterprise Linux 5, and have around 300 systems… is it worth migrating to Red Hat Enterprise Linux 6 now, or should we wait and move directly to Red Hat Enterprise Linux 7?

A: Our general recommendation is to use the most recent version of Red Hat Enterprise Linux for maximum performance, stability, features and hardware support.

Q: I understand that upgrading from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 6 is unsupported.  Does the existence of an ‘upgrade assist’ feature in Red Hat Enterprise Linux 7 suggest that Red Hat will support an upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7?

A: Yes, with certain limitations. For more information, refer to this knowledge base article at the Red Hat Customer Portal.

Q: What about an upgrade assistant from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 7? Context: enterprise customer with significant inertia / resistance to change. And will Red Hat Enterprise Linux 7 support in-place migration from Red Hat Enterprise Linux 5?

A: We have investigated this and determined that there are a large number of incompatibilities between the two releases, and that an in-place upgrade would leave a customer having to perform a significant amount of clean-up. That being said, we are investigating what we can do with the pre-migration assistant tool to help customers plan a migration from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 7.


Q: Are there any new features to integrate Red Hat Enterprise Linux and VMware and vCenter?

A: Red Hat Enterprise Linux has been supported as a guest on VMware since Red Hat Enterprise Linux 4, and Red Hat Enterprise Linux 7 will be supported as a guest on VMware. In addition, Red Hat has collaborated with VMware to have open-vm-tools added to Red Hat Enterprise Linux 7. The open-vm-tools package is designed to simplify the deployment of Linux guests in a VMware environment. For additional information, visit:
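On a Red Hat Enterprise Linux 7 guest, the tools are a package install away; a minimal sketch (vmtoolsd is the service name shipped with open-vm-tools):

bash# yum install open-vm-tools
bash# systemctl start vmtoolsd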

Q: Is NPIV supported in KVM?

A: NPIV is supported with KVM. For additional information on how to set up NPIV with libvirt, visit:
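As a rough sketch, a virtual HBA (vport) can be created through libvirt’s node device API; the parent adapter name scsi_host5 below is illustrative, and you can list NPIV-capable adapters with virsh nodedev-list --cap vports:

bash# cat > vhba.xml <<'EOF'
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
    </capability>
  </capability>
</device>
EOF
bash# virsh nodedev-create vhba.xml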

Q: Will Red Hat Enterprise Linux 7 x86_64 host support an IA64 Red Hat Enterprise Linux 5 or IA64 Red Hat Enterprise Linux 6 guest?

A: No, Red Hat Enterprise Linux 7 with KVM will only support x86_64 guests. KVM is a virtual machine hypervisor, not an emulator. The Red Hat Enterprise Linux virtualization support matrix may provide you with additional insight.

Q: Will the utilities for upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 work if Red Hat Enterprise Linux is a guest that resides on vSphere?

A: Yes. You should be able to upgrade a Red Hat Enterprise Linux 6 guest to a Red Hat Enterprise Linux 7 guest.


Q: We have six Red Hat Satellite servers and we’re about to build the seventh – all are disconnected. My question: how do we synchronize the channel dump ISOs we download from

A: If you need to update channel content on Satellite servers that cannot connect to each other, the best option is to mount the content ISOs locally by:

  1. Loading the initial content ISOs downloaded from the Customer Portal and mounting them to a drive local to each Satellite. Click here for access to documentation on how to mount the individual ISOs into a directory and to run the ‘satellite-sync -m /path/to/mount-point -c <name of channel>’ command to sync the channel content.
  2. Once a Satellite has finished the initial sync of channel content, the channel content can be updated periodically by using the Incremental Content ISOs found on the Customer Portal. Click here to review example Red Hat Enterprise Linux 6 64-bit Satellite 5.5 incremental content ISOs.

You can also find more info in this kbase article:


Q: Where can I get the SELinux coloring book?

A: You can download the PDF and browse the source here. If you find any errors or issues, you can even submit a patch and git pull request!

Live from the Summit: Steering wheels and easy buttons for deploying OpenStack with high availability

Wednesday, April 16th, 2014

With an opening slide of a Formula 1 racer, Red Hat’s Arthur Berezin, senior technical product manager for virtualization, drew parallels between the sport and deploying OpenStack.

“Driving a car without your steering wheel is something you obviously don’t want to do,” Berezin said, “and sometimes it feels like you’re doing that with OpenStack—you’re the driver, and you need a way to control your deployment.”

High availability means “99.999% uptime and making sure everything runs consistently at high scale,” he said, before introducing a new, easy way to deploy OpenStack and ensure high availability.

Berezin demonstrated upstream features in the RDO community that will be coming downstream to Red Hat Enterprise Linux OpenStack Platform soon: Foreman and Staypuft.

Foreman is an open source system for managing configurations, provisioning, and monitoring on multiple cloud providers including OpenStack. It’s a steering wheel of sorts, while Staypuft is “an easy button,” as Berezin described it, for installing Foreman.


Berezin walked through OpenStack examples with Horizon, Cinder, and Network, explaining what features the RDO community uses now to ensure high availability for services, database, and messaging:

  • Services
    • Pacemaker cluster
    • HAProxy load balancer
  • Database
    • Galera DB replication
  • Messaging
    • RabbitMQ mirrored queues

“Choose your options, hit deploy, and you’ve got a highly available environment,” said Berezin.

Staypuft puts you in control with dead-simple features to reduce deployment time and make life a little easier.

“With a single selection button, you control your nodes and you choose the services you want,” Berezin said. “It’s that easy.”


More information


Event: Red Hat Summit 2014
Date: 2:30 p.m., Wed April 16, 2014
Type: Session
Track: Cloud deep dive
Technical difficulty: 3
Title: Red Hat Enterprise Linux OpenStack Platform high availability
Speaker: Arthur Berezin

Live from the Summit: Reset expectations for building enterprise clouds

Wednesday, April 16th, 2014


As a former analyst, Alessandro Perilli has found a way to see through the promises of cloud to the real-world problems every enterprise encounters when building a private cloud.

When we think about cloud, he says, we have certain expectations:

  • Simple
  • Cheap-ish
  • Self service
  • Standardized and fully automated
  • Elastic (application level)
  • Resource consumption-based
  • Infinitely scalable (infrastructure level)
  • Minimally governed

But what we actually get is dramatically different. (If you need a visual, Perilli compared our current expectations of private cloud to a winged Pegasus, and our actual progress on private cloud to a donkey to drive the point home.)

The differences between private enterprise clouds and public are enormous. Compare the list below to our expectations, above:

The (enterprise) private cloud:

  • Undeniably complex
  • Expensive
  • Self-service (the only similarity to the list above)
  • Partially standardized and barely automated
  • All but elastic
  • Mostly unmetered
  • Capacity constrained
  • Heavily governed

As an ex-Gartner analyst, Perilli shared what he called a (very mature) private cloud reference architecture. It includes five layers plus self-service provisioning and chargeback on top of a physical infrastructure. Several vendors (including Red Hat) can sell all of these components.

So what actually makes up a cloud? Do you need all those layers? And if you don’t have them today, when will you get them?

Gartner identified 5 phases to get you from a traditional datacenter to a public cloud. Getting from phase 1 (technologically proficient) to phase 2 (operationally ready) can take years. That’s why Perilli recommends you take a “dev & test” approach to private clouds. “Very few private clouds are in production. There are a few industries where we’ve seen them successful.”


“It’s easier said than done” to be fully standardized and automated. There’s a progression of complexity as you try to provision applications. The more components an app has, the more provisioning is required. The more pieces you try to automate, the harder your job will be. You also have legacy systems that need to be integrated. And when you design applications in the physical world, provisioning isn’t always well documented and, therefore, not repeatable. And, truth be told, it may take you 1 year to convert 10 applications (of the thousands in the enterprise).


What does “cloud application” mean anyway, Perilli asked. The application should have cloud characteristics like rapid scaling and elasticity, self-service, etc. And it should be architected to be automated, failure aware, parallelizable, etc. Perilli noted that most developers and applications don’t fit the cloud model; he says it’s a matter of culture and training. And most organizations, except for Netflix, don’t have everything it takes to do it right. “Enterprises take a long, long, long time to get there,” Perilli said.


Because we’re too busy with provisioning and orchestration issues, charge-back capabilities are taking a back seat. There are resources and licenses to manage, and if you aren’t careful, you’ll miss the charge-back opportunity.


There’s a massive need for governance, but customers aren’t successful in production clouds. You need an approval workflow, and there’s a need for applications to be removed when not needed. We need process governance, but aren’t at the maturity level to provide it.

So what do we do? Stay away from private cloud altogether? Nope.


The reasons to press on: speed to market and quality, plus huge demand for Platform-as-a-Service (PaaS) bundles. Perilli says customers have shared these kinds of successes with him. The key, according to Perilli, is to buy only the cloud you need. Some providers sell you more than you need, and you end up using only 1/12th of their modules.

More tips from Perilli:

  • Don’t believe the promise of a fully automated production cloud. Build a test cloud instead and be pragmatic.
  • Introduce support for scale-out applications in a meaningful way. Consider a multi-tier cloud architecture.

As a new Red Hatter, Perilli provided a quick overview of the capabilities of Red Hat’s cloud architecture. In particular, he pointed out CloudForms as a way to manage disparate systems under a single entity. And instead of buying more than you need, Red Hat plans to rely on certified ISVs to provide more management capabilities on top of our cloud. It’s a lean, smart approach, and one we’ll hear more of in the months to come.


More information


Event: Red Hat Summit 2014
Date: 2:30 p.m., Wed April 16, 2014
Type: Session
Track: Red Hat Cloud
Technical Difficulty: 1
Title: Building enterprise clouds: Resetting expectations to get them right
Speaker: Alessandro Perilli

Live from the Summit: Getting technology across the innovation gap

Wednesday, April 16th, 2014


Brian Stevens, executive vice president and chief technology officer at Red Hat, brought emerging technologies to the main stage in his Wednesday morning keynote. His multi-part, video-laced talk touched on topics that ranged from development practices to services-based deployment.

“It’s no longer about one singular, large transformation shift. It’s not ‘what’s the next big thing’–it’s how we take community-developed technology and get it across the gap to enterprise IT operations. It’s about countless small, incremental changes.”

Stevens broke his talk into chapters, taking a good look at five areas of intense interest today. Each chapter included a short video from influential experts in each area.


In the past, good development meant supporting thousands of types of hardware. Today that also includes hypervisors. (That’s a lot of stuff.)

RPM used to be the tool of choice for building software from source, verifying packages, building application repositories, installing and uninstalling packages, and managing dependencies. RPM was useful, and it helped Red Hat Linux grow in popularity the way that it did. RPM was ported to other OSes and is still in widespread use today–17 years later. We continue trying to jam more things into RPM, even though some are not a good fit.

Then came dotCloud with Docker. In the Docker model, applications are layers in a filesystem. You can use a Docker application as-is, or add and remove layers to fit your needs. Developers can build the apps they like within Docker, without having to worry about where the app is going to run.
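A minimal Dockerfile sketch illustrates the layering model (the base image name and packages are illustrative): each instruction adds a layer that can be reused, swapped, or extended without touching the layers beneath it.

FROM rhel7
RUN yum -y install httpd && yum clean all
ADD index.html /var/www/html/
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]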

“The amount of things you can build with Docker is mind-blowing–even to us,” said Solomon Hykes, dotCloud’s CTO.

> Watch the video: Solomon Hykes (dotCloud CTO)  and Ben Golub (dotCloud CEO)  talk about Docker and Red Hat.


It’s been 8 years since people first started talking about cloud, and we are now 3 or 4 years into serious OpenShift and OpenStack (or PaaS and IaaS) development. There are benefits of these technologies separately, but as they become better integrated the benefits grow. For example, shared monitoring tools and APIs can help admins monitor and maintain applications all the way down through the hardware. And Red Hat teams are very involved in OpenStack, and interested in integrating OpenStack technologies with our solutions.

> Watch the video: Clayton Coleman from OpenShift talks about…


With traditional IT, the more stuff we do, the more systems we need. The more systems we need, the more admins we hire. Data and app growth is making old-fashioned system administration unsustainable. Today’s admins are using software to orchestrate and automate the IT environment. There’s one catch: Existing IT functions must be modular so that services can be orchestrated.

“Virtualization gets us part of the way there,” Stevens said. Starting a virtual machine and configuring it using scripts and tools is common, and software-based storage is getting there.  “Red Hat Storage–or Gluster for Red Hat–was designed with an API in mind so IT and apps can scale out,” said Stevens. “The network has always been the bottleneck.”

The ability to flexibly deploy tiers of applications and data is coming, and projects like OpenDaylight, OpenStack’s Neutron, and Open vSwitch (OVS) are advancing software-defined networking efforts.

> Watch the video: Chris Wright from Red Hat talks about connecting OpenDaylight to OpenStack.


Another way to automate massive processes–another theme of Stevens’ talk–is by changing the workflow model developers use. We’ve seen movement away from 1950s-style waterfall engineering to more agile technologies that can deal with moving targets and the expense of mistakes. “We live in a world of user experience, collaboration, agility, and change,” said Stevens. “Conflicting requirements and magnified, massive-scale projects shared by so many [mean that the] legacy model can’t survive.”

Continuous integration and continuous development (CI/CD) provides a stream of updates and new technology, with the end result being, as Stevens said, “a running system every day.” For Red Hat, one of the biggest beneficiaries of a system of incremental improvement is Red Hat Enterprise Linux. “Red Hat Enterprise Linux 7 will be the first release to experience the value of continuous integration and continuous development,” Stevens said.

> Watch the video: Mark McLoughlin, a leading contributor to OpenStack and a Red Hat engineer, talks about OpenStack’s TripleO and what it contributes to CI/CD.

Stevens admitted that keeping up with OpenStack can be challenging. Six-month cycles (or “mini-waterfalls”) can make it difficult to see what the result will be until the end. CI/CD can help with this, letting both upstream maintainers and downstream organizations accept and reject change as it happens.


Traditional storage of data is expensive, leaving many businesses unable to store everything. Instead, they must pick and choose what is retained or store data in different places. Loss of data–and data that is stored in silos–can limit business intelligence capability.

Red Hat Storage, based on Gluster, helps solve this problem from both sides. It can store, scale, and secure petabytes of data and provide analytics so users can find what they need. Tools from other communities (like MapReduce from Hadoop) are integrated–again, taking advantage of the standardization and flexibility of open source development.

> Watch the video: Steve Watt from the Red Hat Storage team talks about Sahara (formerly Savanna), Hadoop, and other storage advances.

Red Hat has contributors involved in all of these communities and projects–and many more. “Innovation is not one big massive invention–it’s a series of smaller micro inventions,” said Stevens. And over time, with collaboration and integration, these chapters come together to help smooth the path from raw, new ideas to innovation ready for the enterprise.

Each piece, and each person that contributes, is part of that journey.


Event: Red Hat Summit 2014
Date: Wed, April 16, 2014
Type: Keynote
Title: Bridging the gap between community and the enterprise
Speaker: Brian Stevens (Red Hat)

Live from the Summit: Build scalable infrastructure with Red Hat Enterprise Linux OpenStack Platform

Wednesday, April 16th, 2014

Will Foster, Dan Radez, and Kambiz Aghaiepour–all senior engineers with Red Hat–wanted more automation in their environment. “What we did not want to do was be in the business of manually managing the building of [OpenStack] clusters,” Aghaiepour said. That’s a common problem for enterprises–or anyone–thinking about performance benchmarking and scalability testing on OpenStack. But it was also an opportunity.

Before too long, they had 9 racks with 200 baremetal nodes running Red Hat Enterprise Linux OpenStack Platform 4 (based on Havana) on Red Hat Enterprise Linux 6.5. They used Foreman 1.5 (part of Red Hat Satellite 6) for node provisioning and hostgroup-driven OpenStack deployment. Other tools or technologies used included:

  • OpenFlow 1.1
  • IPMI (intelligent platform management interface)
  • Nova Compute
  • Neutron networking
  • Controller
  • OpenStack storage (GlusterFS)
  • Puppet
  • Staypuft (OpenStack Foreman installer)

Their recommended best practices centered on utility services and configuration management. For services, the group administered Puppet, PXE, DHCP, and DNS through Foreman, which keeps things in one place and eases administrative sprawl. For config management, they used Puppetmaster through Foreman and distributed revision control systems. “Anything we do once or twice, we never want to do again. We automate it,” said Foster.

Foster also recommended doing as much as you can through Kickstart %post. Foreman, he said, makes this easy. They also used Linux software RAID. Foster noted that most modern CPUs can handle RAID overhead. His final bit of systems design advice: Keep it simple. Use shared storage for important stuff. Nodes should be a commodity–it should be faster to spin up a new one than to fix an old one.
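A minimal %post sketch, purely as an illustration of the idea (the marker file and ntpd service are hypothetical examples, not the team’s actual scripts):

%post --log=/root/ks-post.log
# hypothetical example: record provenance and enable time sync
echo "provisioned via Foreman" > /etc/motd
chkconfig ntpd on
%end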

Aghaiepour demoed the automated environment–showing off to the packed room how quickly (and easily) a 70-node cluster could be erased, redefined, reprovisioned, and deployed. The whole process took less than 10 minutes. He also demonstrated how the instance exported data to a calendaring tool. This calendar keeps track of the node types that are going to be present in the cluster, so that engineers can figure out when they should plan to test applications or services that require a particular node type.

Want to try setting up a scalable infrastructure of your own?

> Watch the demo: Automated OpenStack deployments with Foreman and Puppet

> Get the provisioning demo tools and scripts from GitHub.

This kind of tooling and DevOps work is intended to help engineers get their jobs done.  IT no longer has to provision servers, or even set up VMs–with enough careful planning and the right automation tools, users can spin up and take down their own instances. And the instances themselves can be responsive to the intended use with proper APIs, groups, and settings.  No new hardware needed.

More information


Event: Red Hat Summit 2014
Date: 10:40 a.m., Wed April 16, 2014
Type: Session
Track: Application and platform infrastructure
Technical difficulty: 3
Title: Building scalable cloud infrastructure using RHEL-OSP
Speakers: Will Foster (senior systems engineer, Red Hat), Kambiz Aghaiepour (principal software engineer, Red Hat), Dan Radez (senior software engineer, Red Hat)

Executive Exchange: Leading in the era of hyper connectivity with Thomas Koulopoulos

Wednesday, April 16th, 2014

“Expunge IT from our vocabulary. It is not separate from the business,” said author and Red Hat Executive Exchange speaker Thomas Koulopoulos. “It is the business, damn it.”

That’s the way Koulopoulos ended his talk with executives attending the one-day conference in San Francisco on Tuesday. Everything that came before built the case for us to completely rethink our approach to IT.

He doesn’t subscribe to the notion that everything that can be invented has been invented. “We’re not even close,” Koulopoulos said. “But we behave as though the best stuff has already been invented.” And we create generational chasms to help justify why we aren’t keeping up. Generation X. Millennials. Baby boomers. He believes these titles were created as a way for us to excuse ourselves from adapting to a rapidly changing world.


So, if you had to choose a word that best defines our ability to reshape society, technology, and business over the past 200 years, what would it be? Attendees guessed “information,” “democracy,” “capitalism,” and the Internet. And what is the word that defines the greatest CHALLENGE to innovation? Guesses included ubiquity, focus, communication, intelligence. Koulopoulos suggested that one word works for both: connections.

“This is what makes us different than Socrates and Plato,” he said. “[Connections] are fundamentally what has changed the human experience more than anything else.” So how are we connecting, and how will our connections change us in the future? No one can predict the future (except sci-fi writers), he said. (See this AT&T ad for proof.) But the velocity at which we’re creating new technology, and the sheer number of people on the planet (who are living longer), means we are on a path of imminent change.

How are we connecting?

  • In 1800, the global population reached 1 billion people for the first time
  • We are projected to have 10 billion people by 2080
  • It is estimated that by 2020 we will have 2.8 trillion machine (or computer-based) connections

The confluence of machine, data, and human connections is creating a new form of intelligence. Cloud is becoming an intelligent organism. And we’re surrounded by sensors in our cars, homes, stores, and cities. A virtual tsunami of information is coming at us. “The number of grains of sand in the world is less than 1% of the data we will have in 2100,” Koulopoulos said.


It’s only natural that we have a hard time processing big numbers like this. And in his upcoming book, The Gen Z Effect: How the Hyperconnected Generation is Changing Business Forever, he explores how younger folks will look at business entirely differently than we do. For example, to them, IT isn’t a separate department from the business—it IS the business. Kids don’t get “aloneness,” he said. They are always connected to their friends in different ways. As his son told him when Koulopoulos told him to go outside and play instead of playing video games: “Dad, this [game] is my cul-de-sac.”

But blaming behaviors on a title like “millennial” is a mistake. “You have a set of behaviors that define what it means to be a part of a new society,” he said. “If you do adopt these behaviors then you become a functioning member of the new society. If you don’t, you’re disconnected. ‘Gen Z’ is just a set of behaviors we decide to take on.” If you’ve read Sherry Turkle’s book, Alone Together: Why we expect more from technology and less from each other, you know that our interactions with technology are at an early stage, and we have control over our future—if we choose to take it.


So how do we boldly go into a future that we know is this chaotic and overwhelming? Koulopoulos suggests that we can’t go directly into the future; rather, we can take the techie path—a twisty, windy path with many diversions and turns. Or we can skip some steps, like our grandparents did with iPads. They didn’t dabble on Commodore computers with us. They used typewriters, then slingshotted to an iPad, and now they’re texting us.

The connections we talked about earlier are shaping tomorrow’s trends. “We will invent highly personalized communities that are hyper local and hyper global at the same time,” he said. And we’ll trade on behavior. We’ll have deep, predictive knowledge of immensely complex systems.

Another prediction is that transparency will apply to business and government. Transparency creates an understanding of behavior, and so do technology and data. So the challenge is to live with transparency while providing security. Your reaction times are shortened, and the amount of data you have access to is radically bigger. “The threat is never where you look for it,” he said.


Those in IT and marketing are swimming in the rip currents, Koulopoulos said. And the frustration we feel as CIOs and leaders of IT is that we’re treading water. Just don’t get stuck.

“You can’t use the patterns of the past to navigate the future,” he said. “Our role is as leaders—not just a catalyst. Business people won’t truly understand the power available or the quagmire. You are the leaders. Your job is to get a seat at the table. If you can’t, you’ll get commoditized.”

Try to find any model or any business where the innovation hasn’t been foundational for that industry and hasn’t been built and supported on the bedrock of IT.

“You are the innovators,” he said. “If you’re still thinking, ‘But I’m not,’ that has to be your mission. If you don’t do it, data scientists will. They are not IT; they’re business folks. That’s a huge threat.”

  • Move from product ownership to strategy and service organization
  • We are missing the predictive view. Operations looks at dashboards, and business analysts look through the rear-view mirror, he said. So who’s looking through the windshield? IT should do that. Change the way things are done and choose the behaviors you want to adopt.
  • What is IT? That’s the question.

As for Koulopoulos, “I look forward to that day when I say, ‘I want to get off the train.’ The world will look different to me then. Completely unrecognizable.”


Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014

Live from the Summit: Using Red Hat products in public clouds

Wednesday, April 16th, 2014


When you’re looking to run your Red Hat-based applications in a public cloud—almost always as part of a hybrid cloud deployment—there are two broad aspects to consider. The first is the overall economics and suitability of public clouds for a specific workload. The second is the specific Red Hat offerings available through the Certified Cloud Provider (CCP) program. Those were the topics covered by Red Hat’s Gordon Haff and Jane Circle in their “How to use Red Hat solutions in a public cloud” presentation.

Haff focused on general considerations associated with using public clouds. Consider the nature of your workloads. Public clouds (and indeed private clouds on infrastructure such as Red Hat Enterprise Linux OpenStack Platform) are optimally matched with workloads that are stateless, latency insensitive, and that scale out rather than up. See, for example, Cloud Infrastructure for the real world.

Workload usage matters as well. A workload with low background usage and only infrequent spikes may require a different type of cloud instance (EC2 instance type, in the case of Amazon) than a workload with more frequent spikes. Haff offered an example of how—just for a single instance—the difference between using an Amazon Web Services (AWS) medium and a 2xlarge instance at 50 percent utilization over the course of a year would result in about a 6x, or $4,000, difference in cost. Multiply that by the hundreds of instances you might see in a typical production deployment and you get a sense for how important understanding your workloads can be.

Of course, using public clouds isn’t just about the economics. Some organizations choose to use public clouds to allow them to focus on core competencies—which may not include running data centers. It also allows them, or their investors, to avoid making capital outlays for server gear against an uncertain future.

Finally, Haff discussed some of the issues associated with compliance and governance in public clouds. In general, the issue isn’t so much security in the classic sense as audit and data management. Of particular concern of late are regulatory regimes governing data placement and notifications. These differ widely by country and state, and even the provider’s nationality can matter, wherever the data may physically reside. (Regional providers are sometimes preferred as a result.)

Circle then discussed how to consume Red Hat products—including but not limited to Red Hat Enterprise Linux—on Red Hat Certified Cloud Providers.  There are currently about 75 CCPs. These are trusted destinations for customers who want to use public clouds as an integral element of a hybrid cloud implementation. They offer images certified by Red Hat, provide the same updates and patches that you get directly from Red Hat, and are backed by Red Hat Global Support Services.

You can use Red Hat products in public clouds through two basic mechanisms: on-demand and Cloud Access.

On-demand consumption is available in monthly and hourly consumption models. Some public cloud providers also have reserved instances for long-term workloads. You engage with the CCP for all support issues, backed by Red Hat Global Support Services, and the CCP bills you for both resource consumption and Red Hat products. The CCP handles updates through their Red Hat Update Infrastructure.

You can think of this as “RHN for the public cloud,” and it’s immediately available and transparent to you. Certain CCPs (currently AWS and Google Cloud Platform) also offer a “bring your own subscription” offering called Cloud Access. Cloud Access provides portability of some Red Hat subscriptions between on-premise and public clouds. You keep your direct relationship with Red Hat but consume on a public cloud. A new Cloud Access feature, just introduced on April 7, lets you import your own image using a cloud provider’s import tools rather than just using a standard image. In the case of Cloud Access, you will typically use Red Hat Satellite to manage updates for both on-premise and CCP images.

The takeaways from this talk?

  • Develop an appropriate application architecture
  • Ensure data is portable: test, test, test!
  • Understand the legal and regulatory compliance requirements of your applications
  • Isolate workloads as needed in a public cloud
  • Choose a cloud provider that is trusted and certified
  • Do the ROI analysis to determine the right consumption model
  • Ensure consistent updates for your images to maintain application certifications
  • Enable hybrid cloud management, policy, and governance


More information


Event: Red Hat Summit 2014
Date: Tues. April 15, 2014
Type: Session
Track: Cloud readiness
Title: How to use Red Hat solutions in a public cloud
Speaker: Jane Circle (Red Hat), Gordon Haff (Red Hat)


