Red Hat Summit Blog

Performance analysis and tuning of Red Hat Enterprise Linux

Friday, June 26th, 2015

Speed matters

In the demanding world of today’s IT environments, users expect rapid responses to requests and greater utilization of their computing platforms. In order to meet these requirements, Red Hat’s Performance Engineering Team devotes itself to analyzing these technology concerns, building automated optimization tools for Red Hat Enterprise Linux (RHEL) and creating useful tuning guides to help users customize configurations for their varied workloads.

In front of a packed room, John Shakshober and the Performance Engineering Team demonstrated RHEL’s performance analysis and tuning tools. With conversations covering cgroup-controlled resource management, tuned performance profiles, real-time kernel principles, hugepage memory allocation, and even Non-Uniform Memory Access (NUMA) node balancing, there was something for everyone.

Of particular focus was NUMA node balancing and how it has evolved with RHEL over the years. NUMA itself refers to the assignment of blocks of memory to a specific microprocessor, allowing the processor fast, direct access to those local memory locations. This approach provides significant performance benefits, but those benefits are best realized when processes are kept aligned with their associated memory.

The Performance Engineering Team detailed how this is accomplished in both RHEL 6 and RHEL 7, explaining that in RHEL 6, this functionality is achieved with a daemon known as “numad”. The numad daemon automatically moves processes to CPU cores in the same NUMA node as their associated data, drastically improving overall system performance. In RHEL 7, this functionality has been built directly into the kernel and no longer requires an additional daemon to achieve the same performance benefits.
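
For a rough sense of what this looks like on a live system, here is a short Python sketch of ours (not from the session) that reads the standard sysfs and procfs paths to list the NUMA nodes and report whether the kernel’s automatic balancing, the RHEL 7 behavior, is switched on:

```python
# List NUMA nodes and report whether kernel automatic NUMA balancing
# (the RHEL 7 mechanism) is enabled. Assumes a standard Linux sysfs/procfs.
import glob
import os

# Each NUMA node appears as /sys/devices/system/node/nodeN
for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    print("%s: CPUs %s" % (os.path.basename(node), cpus))

# RHEL 7-era kernels expose automatic balancing as a sysctl.
balancing = "/proc/sys/kernel/numa_balancing"
if os.path.exists(balancing):
    with open(balancing) as f:
        state = "enabled" if f.read().strip() == "1" else "disabled"
    print("Kernel automatic NUMA balancing: %s" % state)
else:
    # On a RHEL 6-era kernel this knob does not exist; balancing is
    # handled in user space by the numad daemon instead.
    print("No kernel NUMA balancing knob found; check the numad service.")
```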

For more on NUMA node balancing, as well as the other topics covered in this session, please watch the video on our YouTube page!

Innovation in the large enterprise: Using OpenStack, OpenShift, and automation to empower teams

Friday, June 26th, 2015

Innovation, culture, lean, agile, and DevOps. These terms are currently being thrown around in the IT industry as the apex of success. Master these, and your organization will be blessed with an eternity of deployment good fortune and increased efficiency. What does it really take for an organization to be considered agile and innovative in 2015?

Target: Moving from mode 1 to modern

Converting a portion of that large cost center known as enterprise IT away from the old mode 1 methodology is not only a competitive advantage but an essential business requirement in today’s environment. Jeffrey Einhorn, GM of RAPId Infrastructure Services for Target, explained the methodology he championed to help his organization become more innovative, more efficient, and more stable. Many organizations believe having the right tools is the key to achieving this IT utopia, when in fact the true ingredient is being able to change the organization’s culture alongside those shiny new tools.

It is possible. An organization can take system provisioning time from 3 months to 30 minutes using OpenStack. It can spin up thousands of VMs per day, take the creation of development environments from 8 weeks to 15 minutes using an OpenShift pilot, and finally take internal system automation tooling development from 9 months to 10 minutes.

Core areas of focus

The Target team focused on some core areas to achieve DevOps success:

  1. Culture. The most important of the newly minted principles Target follows. Getting the right people committed to helping the organization succeed. Culture is all about shared goals. Starting with flash builds, they gathered all responsible parties together in a room and proved that services that previously took 8 weeks could be delivered in 1 day. That success led to the creation of small, 15-30 person teams that were given 30-day challenges to deliver a service. Eventually these teams were made permanent.
  2. Ownership. Moving away from project-based work to a service ownership approach. Teams became responsible for a service rather than for completing a project, and they owned the entire stack rather than individual technologies like a database. That let them look at the complete stack and work on the right things within it.
  3. Empowerment. Empowerment began by using technologies such as OpenStack to abstract the backend. Open standards allowed them to program against the APIs, regardless of the underlying infrastructure, and programming against the OpenStack APIs prepared the development teams to adapt easily to programming against public cloud APIs (see the sketch after this list). All of this enabled the business to serve customers more quickly and efficiently.
  4. Feedback. Standardizing on agile for the feedback loop helped Target produce lots of small changes based on a lot of feedback. Using 2-week sprints allows for sharing, demoing, then re-planning, and it brings the customer closer to the process to drive the requirements.
  5. Technology practices. Standardizing on a continuous integration/continuous delivery (CI/CD) pipeline of common open source components and tools allowed re-usability. The pipeline changes only when necessary and is shared across the business.
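
To make “programming against the APIs” a bit more concrete, here is a minimal sketch using the openstacksdk Python library to boot an instance. The cloud entry, image, flavor, and network names are placeholders of ours; this illustrates the general approach only and is not Target’s tooling.

```python
# Minimal sketch of provisioning against the OpenStack APIs with the
# openstacksdk library (pip install openstacksdk). The cloud entry,
# image, flavor, and network names below are placeholders.
import openstack

# Reads credentials for the named cloud from clouds.yaml.
conn = openstack.connect(cloud="example-cloud")

image = conn.compute.find_image("rhel-7-guest")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("internal")

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, then report it.
server = conn.compute.wait_for_server(server)
print("Booted %s (%s)" % (server.name, server.status))
```

The same script runs against any OpenStack cloud that offers equivalent images and flavors, which is the portability point behind programming to open standards rather than to a specific backend.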

Other great points that Jeffrey made:

  • Having the support of management is crucial
  • DevOps days, where collaboration, tools, plans, and talks are shared, build stronger teams and promote innovation
  • Developing cookbooks for everything, then sharing and contributing the information online, means everyone can stay informed

Their challenge and environment weren’t unique, but their commitment was. That translated into success for them. As Jeff put it: “Be focused, be bold.” Kudos to Jeffrey, and congratulations on the incredible rewards of your hard work.

Puppet Enterprise and Red Hat Satellite 6

Friday, June 26th, 2015

At Red Hat we call it the Open Source Way. When we talk about open source, we’re talking about a proven way of collaborating to create technology. The freedom to see the code, to learn from it, to ask questions and offer improvements: This is the open source way.

That’s just what we saw today between the Puppet Enterprise teams and Red Hat Satellite Product teams. Carl Caum and Tim Zonca of Puppet Labs, along with Richard Jerrido and Christopher Wells of Red Hat, described their offerings and briefly demonstrated how they are collaborating on interoperability in order to bring the best experience they can to customers.

About Red Hat Satellite and Puppet Enterprise

Most of the room was already familiar with Red Hat Satellite, and many were current users. Chris did a quick level set for those unfamiliar with it, describing Red Hat Satellite as Red Hat’s life-cycle management offering, which covers provisioning, patching, and subscription management of bare-metal, virtual, and public cloud environments.

Tim did the same for Puppet. In short, Puppet manages the configurations of your systems and what’s running on them. Ultimately, it allows for the automation of tasks, so less time is spent fighting fires and more time is spent being productive. Currently, Satellite provides Puppet as a component of Satellite 6.

So why would someone want to use a separate Puppet environment? Well, currently the Puppet included with Red Hat Satellite only supports Red Hat Enterprise Linux. Most organizations are quite heterogeneous and may need Puppet Enterprise support for UNIX distributions or Windows instances. Customers may want to use the enhanced tools that Puppet Enterprise offers. In the next few months, the teams will be announcing support for this integration.

How this integration works

So how would this integration work? Essentially, when a system is provisioned via Red Hat Satellite, a puppet agent is installed as a step in the kickstart post-provision process, handled by a reusable provisioning template called a finish script. This process is all handled by Foreman. A user would change this template to install the Puppet Enterprise agent and point it at the Puppet Enterprise puppet master instead of at the Red Hat Satellite puppet master.

The Puppet Enterprise repositories are shared via Pulp on the Red Hat Satellite Server, which is the first point of integration. The user would then approve the agent request on the Puppet Enterprise puppet master. Every time the puppet agent runs, it pulls its configuration and also sends report information and inventory data (system facts) back to the puppet master. The integration will now send these reports and facts not only to Puppet Enterprise, but also to Red Hat Satellite, using standard components of Puppet Enterprise called the Report Processor and the Fact Terminus.
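
No code was shown in the session, but the flow can be pictured roughly as in the Python sketch below: one set of facts from an agent run is forwarded to two masters of record. This is a conceptual illustration only; the real integration is done by Puppet’s Ruby report processor and fact terminus plugins, and the URLs and payload shape here are our own assumptions.

```python
# Conceptual sketch only: forward one node's facts to both the Puppet
# Enterprise console and Red Hat Satellite. The URLs and payload shape
# are hypothetical placeholders; the real integration uses Puppet's
# report processor and fact terminus plugins (written in Ruby).
import requests

facts = {
    "name": "node01.example.com",
    "facts": {"osfamily": "RedHat", "memorysize_mb": 16384},
}

destinations = [
    "https://pe-console.example.com/facts",           # hypothetical PE endpoint
    "https://satellite.example.com/api/hosts/facts",  # assumed Satellite (Foreman) endpoint
]

for url in destinations:
    resp = requests.post(url, json=facts, timeout=30)
    print("%s -> HTTP %d" % (url, resp.status_code))
```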

This integration was well received by the attendees and will likely be a widely used feature. It should be available to the public very soon!

Red Hat Learning Labs through the eyes of a solution architect

Friday, June 26th, 2015

This year, Red Hat solution architect David Huff helped out with Taste of Red Hat Training labs. Check out his experiences and get a glimpse of the Summit information sessions, hands-on labs, and self-paced learning through David’s eyes on his blog.

https://huffisland.wordpress.com/2015/06/25/taste-of-training-summit-2015/

Dell and Red Hat’s OpenStack journey to the enterprise 

Friday, June 26th, 2015

What do 2 leading IT companies do with a partnership that goes back 15 years? For Red Hat and Dell, the answer is to cultivate an integrated OpenStack solution specifically for enterprise private cloud deployment.

Dell’s Arkady Kanevsky, director of software development, and Randy Perryman, network and solution architect, joined Red Hat senior principal software engineer Steven Reichard to describe how the solution has come together over the last 18 months.

So why the partnership?

“We have 15+ years of joint experience working closely together to make things customers actually want, not just what we want to sell,” said Reichard. “This partnership uses the best hardware for OpenStack with Dell’s servers and switches and the best software for OpenStack, with Red Hat’s complete stack co-engineered to work together.”

Working closely together might be an understatement–Dell engineers were embedded within Red Hat for better collaboration from the start. Initially created as a reference architecture, the solution now has several deployments across multiple enterprise customers.

“We made the reference architecture as flexible as possible,” said Perryman, “So you can choose how many virtual machines you want, plus their sizes, tenants, data, and performance.” The solution is now on version 5, and deployment time has been reduced by half with each release.

Perryman walked the crowd through a demo of a highly available solution for Neutron (OpenStack’s networking project), inducing failure in a server to show how long it took to recover. The screen doesn’t lie: Once one went down, the solution load-balanced between the 2 controllers in seconds.

“OpenStack is constantly changing, so we don’t want to give you bleeding edge when you install. It’s stable from the start,” said Reichard.

Each release generally mirrors OpenStack’s 6-month release schedule, though Dell and Red Hat engineers take a little extra time to do some bug finding and fixing before shipping out the joint solution. The next release is coming soon and will be based on Red Hat Enterprise Linux OpenStack Platform 7, which is, in turn, based on Kilo.

Future versions will include more hardware options for each component (servers, network, storage), and will integrate more OpenStack and partner components “when we think they’re mature enough and ready,” according to Reichard.

Marco Bill-Peter’s customer support keynote, illustrated

Thursday, June 25th, 2015

Illustration by Libby Levi.

Red Hat Training and Certification update

Thursday, June 25th, 2015

Red Hat Global Learning Services leaders Ken Goetz, Pete Hnath, and Randolph Russell took turns addressing the technology training market, highlighting new features, courses, and plans for Red Hat’s training and certification programs.

They also used this breakout session to announce a special standing discount for Red Hat Certified Architects.

Cloud and Linux skills are in demand

Ken began by outlining the opportunities, particularly in cloud technologies. He said that IDC predicts that 5 million cloud jobs will be left unfilled in 2015. “IT hiring managers report that the biggest reason they fail to fill open requisitions for cloud-related IT jobs is the candidates’ lack of sufficient training, experience, or certification,” he said.

Likewise, high demand for trained Linux professionals has resulted in a shortfall. 90% of IT managers plan to hire people with Linux skills within 6 months, Ken said.

DevOps and next-generation architectures

“Philosophically, we don’t see DevOps as a tool–you won’t go out and buy the DevOps tool,” Ken said. “It’s not a person. You can’t hire the DevOps guy or gal. It’s a multi-role thing. Developers, QA, production–everyone must be on board. It’s a philosophy. A way of approaching things.”

From a course development standpoint, the same philosophy is true. Instead of teaching a tool, Red Hat must teach customers how to automate and orchestrate their systems, using a multidisciplinary approach. “DevOps doesn’t traditionally speak to a classroom environment, so our focus is on the delivery pipeline,” Ken said.

How our approach is expanding

Traditional instruction is still necessary, with product-focused coursework as the foundation for future DevOps learning. Product topics include:

  • Red Hat Satellite
  • Red Hat CloudForms
  • Red Hat Enterprise Linux OpenStack Platform
  • Red Hat JBoss Middleware

The next step forward is DevOps concept courses, which are in the planning stages. They are the middle ground between product-based training and pure conceptual instruction. Topics could include:

  • Agile development with OpenShift by Red Hat
  • OpenShift by Red Hat architecture and administration
  • Application deployment automation with Puppet

Pete Hnath indicated their stretch goal is to one day add pure, process-based courses. This could include topics like:

  1. Developing code for continuous delivery
  2. Continuous integration and quality assurance

New offerings

OpenStack CL210 is the most popular course outside of the RHCE-track courses, so it’s no surprise that new OpenStack offerings are now available. “This is something we want to be the leaders in,” Randy Russell said. The training path for this popular technology will mimic the Red Hat Certified Engineer track, with 3 levels, advanced exams, and upper-level content for solutions like Red Hat Ceph Storage. Offerings will be updated to add instruction on new features of Red Hat Enterprise Linux OpenStack Platform 6.

Pete returned to highlight 2 new courses:

Managing containers with Red Hat Enterprise Linux Atomic Host (RH270)
This 3-day course teaches students to manage containerized apps using Dockerfiles and to orchestrate containers with Kubernetes.

Building Advanced Red Hat Enterprise Applications (JB501)
This course is for advanced middleware developers, and focuses primarily on a front- and back-office integration case study, using multiple middleware products, including:

  • Red Hat JBoss Enterprise Application Platform
  • Red Hat JBoss Data Grid
  • Red Hat JBoss BPM Suite
  • Red Hat JBoss BRMS
  • Red Hat JBoss Fuse
  • Red Hat JBoss Data Virtualization

Making training easier to understand

Other challenges (and changing student demographics) affect how Red Hat training courses are delivered. Millennials and more tech-savvy audiences may prefer online, on-demand, self-directed learning over traditional classroom instruction. Other factors include:

  • New or inexperienced staff may need additional skills
  • Continuous release cycles require continuous learning
  • Time
  • Cross-training because of increased solution integration
  • Post-course support for students after initial learning
  • Global coverage for teams that are spread across the globe
  • Cost-effective training, even for highly desired skills

To solve these challenges, and offer the kind of self-directed learning many students are asking for, Red Hat is introducing Red Hat Learning Subscription. A learning subscription includes unlimited access to all of Red Hat’s online courses–content, videos, and labs.

Red Hat Learning Subscription includes platform, cloud, and middleware tracks. Benefits of the subscription include:

  • All content–including transcripts–is indexed and searchable. (It’s XML-based.)
  • Students get the same user experience as in the classroom
  • Full-length, instructor-led, HD video courses
  • Get new course content as it’s added, as long as you have an active subscription
  • Available on 5 continents in 9 languages
  • Access to dozens of courses for the price of 2

Standard and premium subscribers get additional benefits, including:

  • Access to expert seminars given by Red Hat thought leaders
  • Access to subscriber support with live online chat

The team saved today’s big announcement for last: Red Hat Certified Architects who maintain their credentials get a 50% discount when purchasing their Red Hat Learning Subscription.

Randy said, “[RHCA] is a huge investment of time, investment, money. Once you have attained that goal, we want to keep you. We think it’s good for you, your career, your employer—and it’s good for Red Hat.”

“The world is going to need a lot more architects. And this is custom made for that particular job role.”

Enterprise Containers 101 with Red Hat platform architect, Langdon White

Thursday, June 25th, 2015

This was a full session, with a line out the door just minutes before it was scheduled to start. A majority of the attendees self-identified as system administrators. A handful were developers. Almost none had experience with containers.

Red Hat platform architect Langdon White has experience as a back-end developer, and the session was intended to walk the crowd through exactly how he built a container–and why. He listed the steps he goes through to build applications:

  • First, a VM runs applications
  • Build a “monolith” (take what’s on a VM and put it in a container)
  • Create a set of containers
  • Finally, a “nulecule”

When Langdon was building a container, he used Drupal running on a single VM, and he also looked into Software Collections to help with installing more than 1 version of software on a single machine (he used mariadb, php, and nginx).

Langdon put it all in a container, “because you have this nice ‘blob’ that you completely control, but it’s not as heavyweight as a VM. There’s a shared kernel, but it is also independent (can run different versions).”
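
As a concrete, heavily simplified illustration of that “blob”, here is a sketch using the Docker SDK for Python; the image name and port mapping are placeholders of ours, not the exact image Langdon built.

```python
# Sketch with the Docker SDK for Python (pip install docker): run the
# whole stack as one "blob" container. The image name and port mapping
# are placeholders.
import docker

client = docker.from_env()

# One container holds the entire stack (web server, PHP, database),
# sharing the host kernel but otherwise isolated from it.
blob = client.containers.run(
    "example/drupal-monolith:latest",   # hypothetical image of the full stack
    ports={"80/tcp": 8080},             # expose the site on the host
    detach=True,
    name="drupal-blob",
)

blob.reload()                           # refresh status from the daemon
print(blob.name, blob.status)
```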

He advocated for the Red Hat developer program website and using it to learn about creating containers. Specifically, there’s a lot of info on using Vagrant to launch containers.

Next, Langdon talked about when to create separate containers. You can break them apart based on the different microservices you may want to live on their own. Since you’re testing your containers at this point, you don’t want to run them in production. Kubernetes can help keep track of your containers. (Langdon suggested setting up 2 containers as 2 different pods.)
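
A rough sketch of “2 containers as 2 different pods” with the official Kubernetes Python client follows; the image names and the default namespace are placeholder choices of ours, not what Langdon used.

```python
# Sketch with the official Kubernetes Python client (pip install kubernetes).
# Image names and the namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig
api = client.CoreV1Api()

def make_pod(name, image):
    """Build a single-container pod definition."""
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name=name, image=image)]
        ),
    )

# Two containers, each in its own pod, so Kubernetes tracks them separately.
for name, image in [("drupal-web", "drupal:8"), ("drupal-db", "mariadb:10")]:
    api.create_namespaced_pod(namespace="default", body=make_pod(name, image))

for pod in api.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)
```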

“Nulecule” comes in when you want to launch OpenStack from Docker containers.

He had a few lessons he’s learned by doing:

  1. When you’re doing this kind of work, just get the “blob” out, then shave off different services and think about how you can package them in a nulecule and share them across the organization.
  2. If you’ve worked with Docker containers, you may start with architecture and orchestration, then build the containers. He recommends doing the opposite, so you’re not building documentation for documentation’s sake.
  3. Visit the Red Hat Customer Portal and look at Linter for Dockerfile. Linter can tell you whether you built your files correctly. When using a Dockerfile, White said to imagine each line has a reboot afterwards, so do things on the same line when needed (see the sketch after this list).
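
Here is a small sketch of that layer advice, building an image from an inline Dockerfile with the Docker SDK for Python; the base image and package names are placeholders of ours, and the chained RUN line is the point.

```python
# Sketch: build an image from an inline Dockerfile with the Docker SDK
# for Python (pip install docker). Base image and packages are placeholders.
import io
import docker

# Each Dockerfile instruction produces a new layer with no carried-over
# process state (the "reboot" after every line), so related commands are
# chained on a single RUN line.
dockerfile = b"""
FROM registry.access.redhat.com/rhel7
RUN yum -y install httpd php && \\
    yum clean all
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
"""

client = docker.from_env()
image, build_log = client.images.build(
    fileobj=io.BytesIO(dockerfile),
    tag="demo/web:latest",
)
print("Built", image.tags)
```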

Langdon ended the talk with a sort of lightning round of tips and tricks:

  • The stuff you don’t change much should stay near the top.
  • Minimize the number of layers before production. When you get closer to deployment, clean up the code.
  • The “offline first” movement for mobile apps is interesting. Everyone loves applications that work offline and then add features depending on bandwidth. Think about that in relation to datacenter services: Can you operate in error modes at a lesser quality of service?
  • OpenShift 3 by Red Hat is just another Kubernetes, so the move is seamless. This “blob” is made of a few containers with orchestration built in via Atomic, and you can push it onto OpenShift as-is.
  • He’s not sure that containerizing databases is a good idea. Most of the time, there is some big-iron system that holds all of your data. There are use cases, though, such as old-school data marts where the data is ephemeral, where it might be a fit.
  • Sys admins in the audience said that developers haven’t taken on containers yet. How are containers managed in orgs? Who owns the containers? Langdon says that today, everybody does everything. One of the huge benefits of moving toward DevOps is that you really can say who owns what. This is why he likes OpenShift and Software Collections: He knows where things stop and start. Developers know what they’re going to get paged about. Operators know what they’re going to get paged about.
  • He said he’d like to see all Red Hat products ship in a container.
