
Live from the Summit: Steering wheels and easy buttons for deploying OpenStack with high availability

Wednesday, April 16th, 2014

With an opening slide of a Formula 1 racer, Red Hat’s Arthur Berezin, senior technical product manager for virtualization, drew parallels between the sport and deploying OpenStack.

“Driving a car without your steering wheel is something you obviously don’t want to do,” Berezin said, “and sometimes it feels like you’re doing that with OpenStack—you’re the driver, and you need a way to control your deployment.”

High availability means “99.999% uptime and making sure everything runs consistently at high scale,” he said, before introducing a new, easy way to deploy OpenStack and ensure high availability.

Berezin demonstrated upstream features in the RDO community that will be coming downstream to Red Hat Enterprise Linux OpenStack Platform soon: Foreman and Staypuft.

Foreman is an open source system for managing configurations, provisioning, and monitoring across multiple cloud providers, including OpenStack. It’s a steering wheel of sorts, while Staypuft is “an easy button,” as Berezin described it, for installing OpenStack through Foreman.


Berezin walked through OpenStack examples with Horizon, Cinder, and Neutron, explaining which features the RDO community uses today to ensure high availability for services, database, and messaging:

  • Services
    • Pacemaker cluster
    • HAProxy load balancer
  • Database
    • Galera DB replication
  • Messaging
    • RabbitMQ mirrored queues
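The load-balancing piece of that stack can be sketched as an HAProxy configuration fragment like the one below. The virtual IP, hostnames, and health-check values are illustrative placeholders, not values from the session.

```
# Hypothetical haproxy.cfg excerpt: balance the Horizon dashboard across
# three controllers. Addresses and names are illustrative placeholders.
listen horizon
    bind 192.168.1.10:80          # virtual IP fronting the dashboard
    balance roundrobin
    option httpchk GET /
    server controller1 192.168.1.11:80 check inter 2000 rise 2 fall 5
    server controller2 192.168.1.12:80 check inter 2000 rise 2 fall 5
    server controller3 192.168.1.13:80 check inter 2000 rise 2 fall 5
```

In a Pacemaker-managed cluster, the virtual IP itself would typically be a cluster resource, so the address fails over with the HAProxy service.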

“Choose your options, hit deploy, and you’ve got a highly available environment,” said Berezin.

Staypuft puts you in control with dead-simple features to reduce deployment time and make life a little easier.

“With a single selection button, you control your nodes and you choose the services you want,” Berezin said. “It’s that easy.”


More information


Event: Red Hat Summit 2014
Date: 2:30 p.m., Wed April 16, 2014
Type: Session
Track: Cloud deep dive
Technical difficulty: 3
Title: Red Hat Enterprise Linux OpenStack Platform high availability
Speaker: Arthur Berezin

Live from the Summit: Reset expectations for building enterprise clouds

Wednesday, April 16th, 2014


As a former analyst, Alessandro Perilli has found a way to see through the promises of cloud to the real-world problems every enterprise encounters when building a private cloud.

When we think about cloud, he says, we have certain expectations:

  • Simple
  • Cheap-ish
  • Self service
  • Standardized and fully automated
  • Elastic (application level)
  • Resource consumption-based
  • Infinitely scalable (infrastructure level)
  • Minimally governed

But what we actually get is dramatically different. (If you need a visual, Perilli compared our current expectations of private cloud to a winged Pegasus, and our actual progress on private cloud to a donkey to drive the point home.)

The differences between private enterprise clouds and public are enormous. Compare the list below to our expectations, above:

The (enterprise) private cloud:

  • Undeniably complex
  • Expensive
  • Self-service (the only similarity to the list above)
  • Partially standardized and barely automated
  • All but elastic
  • Mostly unmetered
  • Capacity constrained
  • Heavily governed

As an ex-Gartner analyst, Perilli shared what he called a (very mature) private cloud reference architecture. It includes five layers plus self-service provisioning and chargeback on top of a physical infrastructure. Several vendors (including Red Hat) can sell all of these components.

So what actually makes up a cloud? Do you need all those layers? And if you don’t have them today, when will you get them?

Gartner identified five phases to get you from a traditional datacenter to a public cloud. Getting from phase 1 (technologically proficient) to phase 2 (operationally ready) can take years. That’s why Perilli recommends you take a “dev & test” approach to private clouds. “Very few private clouds are in production. There are a few industries where we’ve seen them successful.”


“It’s easier said than done” to be fully standardized and automated. There’s a progression of complexity as you try to provision applications: the more components an app has, the more provisioning is required, and the more pieces you try to automate, the harder your job will be. You also have legacy systems that need to be integrated. And when applications are designed in the physical world, provisioning isn’t always well documented and, therefore, isn’t repeatable. Truth be told, it may take you a year to convert 10 applications (of the thousands in the enterprise).


What does “cloud application” mean anyway, Perilli asked. The application should have cloud characteristics like rapid scaling, elasticity, and self-service, and it should be architected to be automated, failure aware, and parallelizable. Perilli noted that most developers and applications don’t fit the cloud model; it’s a matter of culture and training. And most organizations, Netflix excepted, don’t have everything it takes to do it right. “Enterprises take a long, long, long time to get there,” Perilli said.


Because we’re too busy with provisioning and orchestration issues, charge-back capabilities are taking a back seat. There are resources and licenses to manage, and if you aren’t careful, you’ll miss the charge-back opportunity.


There’s a massive need for governance, but customers aren’t yet successful with it in production clouds. You need an approval workflow, and applications need to be removed when they’re no longer needed. We need process governance, but we aren’t at the maturity level to provide it.

So what do we do? Stay away from private cloud altogether? Nope.


The payoff? Speed to market and quality, along with huge demand for Platform-as-a-Service (PaaS) bundles; Perilli says customers have shared these kinds of successes with him. The key, according to Perilli, is to buy only the cloud you need. Some providers sell you more than you need, and you end up using only 1/12th of their modules.

More tips from Perilli:

  • Don’t believe the promise of a fully automated production cloud. Build a test cloud instead and be pragmatic.
  • Introduce support for scale-out applications in a meaningful way. Consider a multi-tier cloud architecture.

As a new Red Hatter, Perilli provided a quick overview of the capabilities of Red Hat’s cloud architecture. In particular, he pointed out CloudForms as a way to manage disparate systems under a single entity. And instead of buying more than you need, Red Hat plans to rely on certified ISVs to provide more management capabilities on top of our cloud. It’s a lean, smart approach, and one we’ll hear more of in the months to come.


More information


Event: Red Hat Summit 2014
Date: 2:30 p.m., Wed April 16, 2014
Type: Session
Track: Red Hat Cloud
Technical Difficulty: 1
Title: Building enterprise clouds: Resetting expectations to get them right
Speaker: Alessandro Perilli

Live from the Summit: Getting technology across the innovation gap

Wednesday, April 16th, 2014


Brian Stevens, executive vice president and chief technology officer at Red Hat, brought emerging technologies to the main stage in his Wednesday morning keynote. His multi-part, video-laced talk touched on topics that ranged from development practices to services-based deployment.

“It’s no longer about one singular, large transformation shift. It’s not ‘what’s the next big thing’–it’s how we take community-developed technology and get it across the gap to enterprise IT operations. It’s about countless small, incremental changes.”

Stevens broke his talk into chapters, taking a good look at five areas of intense interest today. Each chapter included a short video from influential experts in each area.


In the past, good development meant supporting thousands of types of hardware. Today that also includes hypervisors. (That’s a lot of stuff.)

RPM used to be the tool of choice for building software from source, verifying packages, building application repositories, installing and uninstalling packages, and managing dependencies. RPM was useful, and it helped Red Hat Linux grow in popularity the way that it did. RPM was ported to other OSes and is still in widespread use today–17 years later. We continue trying to jam more things into RPM, even though some are not a good fit.

Then came dotCloud with Docker. In the Docker model, applications are layers in a filesystem. You can use a Docker application as-is, or add and remove layers to fit your needs. Developers can build the apps they like within Docker, without having to worry about where the app is going to run.
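The layering model can be sketched with a minimal Dockerfile, where each instruction produces a new filesystem layer. The base image, package, and paths below are illustrative choices, not examples from the talk.

```dockerfile
# Illustrative Dockerfile: each instruction adds a layer on top of the last.
FROM centos:6                  # base image layer (name is a placeholder)
RUN yum install -y httpd       # new layer: package installation
COPY site/ /var/www/html/      # new layer: application content
CMD ["httpd", "-DFOREGROUND"]  # metadata only; runs the web server
```

Because layers are shared and cached, swapping the application content only rebuilds the final layers, which is part of what makes the model so flexible.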

“The amount of things you can build with Docker is mind-blowing–even to us,” said Solomon Hykes, dotCloud’s CTO.

> Watch the video: Solomon Hykes (dotCloud CTO)  and Ben Golub (dotCloud CEO)  talk about Docker and Red Hat.


It’s been 8 years since people first started talking about cloud, and we are now 3 or 4 years into serious OpenShift and OpenStack (or PaaS and IaaS) development. There are benefits of these technologies separately, but as they become better integrated the benefits grow. For example, shared monitoring tools and APIs can help admins monitor and maintain applications all the way down through the hardware. And Red Hat teams are very involved in OpenStack, and interested in integrating OpenStack technologies with our solutions.

> Watch the video: Clayton Coleman from OpenShift talks about…


With traditional IT, the more stuff we do, the more systems we need. The more systems we need, the more admins we hire. Data and app growth is making old-fashioned system administration unsustainable. Today’s admins are using software to orchestrate and automate the IT environment. There’s one catch: Existing IT function must be modular so that services can be orchestrated.

“Virtualization gets us part of the way there,” Stevens said. Starting a virtual machine and configuring it using scripts and tools is common, and software-based storage is getting there.  “Red Hat Storage–or Gluster for Red Hat–was designed with an API in mind so IT and apps can scale out,” said Stevens. “The network has always been the bottleneck.”

The ability to flexibly deploy tiers of applications and data is coming, and projects like OpenDaylight, OpenStack’s Neutron, and Open vSwitch (OVS) are advancing software-defined networking efforts.

> Watch the video: Chris Wright from Red Hat talks about connecting OpenDaylight to OpenStack.


Another way to automate massive processes–another theme of Stevens’ talk–is by changing the workflow model developers use. We’ve seen movement away from 1950s-style waterfall engineering to more agile methodologies that can deal with moving targets and the expense of mistakes. “We live in a world of user experience, collaboration, agility, and change,” said Stevens. “Conflicting requirements and magnified, massive-scale projects shared by so many [mean that the] legacy model can’t survive.”

Continuous integration and continuous delivery (CI/CD) provide a stream of updates and new technology, with the end result being, as Stevens said, “a running system every day.” For Red Hat, one of the biggest beneficiaries of a system of incremental improvement is Red Hat Enterprise Linux: “Red Hat Enterprise Linux 7 will be the first release to experience the value of continuous integration and continuous development,” Stevens said.

> Watch the video: Mark McLoughlin, a leading contributor to OpenStack and a Red Hat engineer, talks about OpenStack’s TripleO and what it contributes to CI/CD.

Stevens admitted that keeping up with OpenStack can be challenging. Six-month cycles (or “mini-waterfalls”) can make it difficult to see what the result will be until the end. CI/CD can help with this, letting both upstream maintainers and downstream organizations accept and reject change as it happens.


Traditional storage of data is expensive, leaving many businesses unable to store everything. Instead, they must pick and choose what is retained or store data in different places. Loss of data–and data that is stored in silos–can limit business intelligence capability.

Red Hat Storage, based on Gluster, helps solve this problem from both sides. It can store, scale, and secure petabytes of data and provide analytics so users can find what they need. Tools from other communities (like MapReduce from Hadoop) are integrated–again, taking advantage of the standardization and flexibility of open source development.
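For a concrete flavor of that scale-out model, creating a two-way replicated Gluster volume takes only a few commands. The hostnames and brick paths below are placeholders, not part of the keynote.

```
# Illustrative GlusterFS commands (run on one of the storage nodes after
# peering). Hostnames and brick paths are placeholders.
gluster volume create gv0 replica 2 server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start gv0
gluster volume info gv0    # confirm the volume is started and replicated
```

Clients then mount the volume over the network, and capacity grows by adding bricks rather than replacing arrays.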

> Watch the video: Steve Watt, from the Red Hat Storage team, talks about Sahara (formerly Savanna), Hadoop, and other storage advances.

Red Hat has contributors involved in all of these communities and projects–and many more. “Innovation is not one big massive invention–it’s a series of smaller micro-inventions,” said Stevens. And over time, with collaboration and integration, these chapters come together to help smooth the path from raw, new ideas to innovation ready for the enterprise.

Each piece, and each person that contributes, is part of that journey.


Event: Red Hat Summit 2014
Date: Wed, April 16, 2014
Type: Keynote
Title: Bridging the gap between community and the enterprise
Speaker: Brian Stevens (Red Hat)

Live from the Summit: Build scalable infrastructure with Red Hat Enterprise Linux OpenStack Platform

Wednesday, April 16th, 2014


Will Foster, Kambiz Aghaiepour, and Dan Radez–all senior engineers with Red Hat–wanted more automation in their environment. “What we did not want to do was be in the business of manually managing the building of [OpenStack] clusters,” Aghaiepour said. That’s a common problem for enterprises–or anyone–thinking about performance benchmarking and scalability testing on OpenStack. But it was also an opportunity.

Before too long, they had 9 racks with 200 baremetal nodes running Red Hat Enterprise Linux OpenStack Platform 4 (based on Havana) on Red Hat Enterprise Linux 6.1. They used Foreman 1.5 with Red Hat Satellite 6 for node provisioning and deployment. Other tools or technologies used included:

  • OpenFlow 1.1
  • Hostgroups
  • Nova Compute
  • Neutron networking
  • Controller
  • OpenStack storage (GlusterFS)
  • Puppetmaster
  • Staypuft (OpenStack Foreman installer)

The team’s recommended best practices centered on utility services and configuration management. For services, the group administered Puppet, PXE, DHCP, and DNS through Foreman, which keeps everything in one place and eases administrative sprawl. For config management, they used Puppetmaster through Foreman along with distributed revision control. “Anything we do once or twice, we never want to do again. We automate it,” said Foster.

Foster also recommended doing as much as you can through Kickstart %post. Foreman, he said, makes this easy. They also used Linux software RAID. Foster noted that most modern CPUs can handle RAID overhead. His final bit of systems design advice: Keep it simple. Use shared storage for important stuff.  Nodes should be a commodity–it should be faster to spin up a new one than to fix an old one.
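As a rough sketch of that approach, a Kickstart along these lines could define software RAID at install time and do setup work in %post. The RAID layout, package, and hostname below are assumptions for illustration, not the team’s actual configuration.

```
# Hypothetical Kickstart fragments. The RAID layout and %post steps are
# illustrative assumptions, not the presenters' actual setup.

# Linux software RAID 1 for / across two disks, defined at install time:
part raid.01 --size=51200 --ondisk=sda
part raid.02 --size=51200 --ondisk=sdb
raid / --level=RAID1 --device=md0 raid.01 raid.02

%post
# Runs in the installed system after package installation: enroll the
# node with the config-management server so Puppet finishes the build.
yum install -y puppet
echo "server = puppet.example.com" >> /etc/puppet/puppet.conf
chkconfig puppet on
%end
```

Because Foreman renders these templates per hostgroup, the same %post logic can apply across every node class without hand-editing.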

Aghaiepour demoed the automated environment–showing off to the packed room how quickly (and easily) a 70-node cluster could be erased, redefined, reprovisioned, and deployed. The whole process took less than 10 minutes. He also demonstrated how the instance exported data to a calendaring tool. This calendar keeps track of the node types that are going to be present in the cluster, so that engineers can figure out when they should plan to test applications or services that require a particular node type.

Want to try setting up a scalable infrastructure of your own?

> Get the provisioning demo tools and scripts from GitHub.

This kind of tooling and DevOps work is intended to help engineers get their jobs done.  IT no longer has to provision servers, or even set up VMs–with enough careful planning and the right automation tools, users can spin up and take down their own instances. And the instances themselves can be responsive to the intended use with proper APIs, groups, and settings.  No new hardware needed.

More information


Event: Red Hat Summit 2014
Date: 10:40 a.m., Wed April 16, 2014
Type: Session
Track: Application and platform infrastructure
Technical difficulty: 3
Title: Building scalable cloud infrastructure using RHEL-OSP
Speakers: Will Foster (senior systems engineer, Red Hat), Kambiz Aghaiepour (principal software engineer, Red Hat), Dan Radez (senior software engineer, Red Hat)

Executive Exchange: Leading in the era of hyperconnectivity with Thomas Koulopoulos

Wednesday, April 16th, 2014

“Expunge IT from our vocabulary. It is not separate from the business,” said author and Red Hat Executive Exchange speaker Thomas Koulopoulos. “It is the business, damn it.”

That’s the way Koulopoulos ended his talk with executives attending the one-day conference in San Francisco on Tuesday. Everything that came before it built the case for completely rethinking our approach to IT.

He doesn’t subscribe to the notion that everything that can be invented has been invented. “We’re not even close,” Koulopoulos said. “But we behave as though the best stuff has already been invented.” And we create generational chasms to help justify why we aren’t keeping up. Generation X. Millennials. Baby boomers. He believes these titles were created as a way for us to excuse ourselves from adapting to a rapidly changing world.


So, if you had to choose a word that best defines our ability to reshape society, technology, and business over the past 200 years, what would it be? Attendees guessed “information,” “democracy,” “capitalism,” and the Internet. And what is the word that defines the greatest CHALLENGE to innovation? Guesses included ubiquity, focus, communication, and intelligence. Koulopoulos suggested that one word works for both: connections.

“This is what makes us different from Socrates and Plato,” he said. “[Connections] are fundamentally what has changed the human experience more than anything else.” So how are we connecting, and how will our connections change us in the future? No one can predict the future (except sci-fi writers), he said. (See this AT&T ad for proof.) But the velocity at which we’re creating new technology, and the sheer number of people on the planet (who are living longer), means we are on a path of imminent change.

How are we connecting?

  • In 1800, the global population reached 1 billion people for the first time
  • We are projected to have 10 billion people by 2080
  • It is estimated that by 2020 we will have 2.8 trillion machine (or computer-based) connections

The confluence of machine, data, and human connections is creating a new form of intelligence. Cloud is becoming an intelligent organism. And we’re surrounded by sensors in our cars, homes, stores, and cities. A virtual tsunami of information is coming at us. “The number of grains of sand in the world is less than 1% of the data we will have in 2100,” Koulopoulos said.


It’s only natural that we have a hard time processing big numbers like this. In his upcoming book, The Gen Z Effect: How the Hyperconnected Generation is Changing Business Forever, he explores how younger folks look at business entirely differently than we do. For example, to them, IT isn’t a separate department from the business—it IS the business. Kids don’t get “aloneness,” he said. They are always connected to their friends in different ways. When Koulopoulos told his son to go outside and play instead of playing video games, the reply was: “Dad, this [game] is my cul-de-sac.”

But blaming behaviors on a title like “millennial” is a mistake. “You have a set of behaviors that define what it means to be a part of a new society,” he said. “If you do adopt these behaviors then you become a functioning member of the new society. If you don’t, you’re disconnected. ‘Gen Z’ is just a set of behaviors we decide to take on.” If you’ve read Sherry Turkle’s book, Alone Together: Why we expect more from technology and less from each other, you know that our interactions with technology are at an early stage, and we have control over our future—if we choose to take it.


So how do we boldly go into a future that we know will be chaotic and overwhelming? Koulopoulos suggests that we can’t go directly into the future. We can take the techie path—a twisty, windy road with many diversions and turns. Or we can skip steps, the way our grandparents did with iPads: they didn’t dabble on Commodore computers with us. They used typewriters, slingshotted straight to an iPad, and now they’re texting us.

The connections we talked about earlier are shaping tomorrow’s trends. “We will invent highly personalized communities that are hyper local and hyper global at the same time,” he said. And we’ll trade on behavior. We’ll have deep, predictive knowledge of immensely complex systems.

Another prediction: transparency will apply to business and government. Transparency creates an understanding of behavior, and so do technology and data. The challenge, then, is to live with transparency while still providing security. Reaction times are shortened, and the amount of data you have access to is radically bigger. “The threat is never where you look for it,” he said.


Those in IT and marketing are swimming in the rip currents, Koulopoulos said. And the frustration we feel as CIOs and leaders of IT is that we’re treading water. Just don’t get stuck.

“You can’t use the patterns of the past to navigate the future,” he said. “Our role is as leaders—not just a catalyst. Business people won’t truly understand the power available or the quagmire. You are the leaders. Your job is to get a seat at the table. If you can’t, you’ll get commoditized.”

It’s hard to find any model or any business where innovation hasn’t been foundational for that industry, built and supported on the bedrock of IT.

“You are the innovators,” he said. “If you’re still thinking, ‘But I’m not,’ that has to be your mission. If you don’t do it, data scientists will. They are not IT; they’re business folks. That’s a huge threat.”

  • Move from product ownership to strategy and service organization
  • We are missing the predictive view. Operations looks at dashboards, and business analysts look through the rear-view mirror, he said. So who’s looking through the windshield? IT should do that. Change the way things are done and choose the behaviors you want to adopt.
  • What is IT? That’s the question.

As for Koulopoulos, “I look forward to that day when I say, ‘I want to get off the train.’ The world will look different to me then. Completely unrecognizable.”

More about the Executive Exchange


Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014

Live from the Summit: Using Red Hat products in public clouds

Wednesday, April 16th, 2014


When you’re looking to run your Red Hat-based applications in a public cloud—almost always as part of a hybrid cloud deployment—there are two broad aspects to consider. The first is the overall economics and suitability of public clouds for a specific workload. The second is the specific Red Hat offerings available through the Certified Cloud Provider (CCP) program. Those were the topics covered by Red Hat’s Gordon Haff and Jane Circle in their “How to use Red Hat solutions in a public cloud” presentation.

Haff focused on general considerations associated with using public clouds. Consider the nature of your workloads. Public clouds (and indeed private clouds on infrastructure such as Red Hat Enterprise Linux OpenStack Platform) are optimally matched with workloads that are stateless, latency insensitive, and that scale out rather than up. See, for example, Cloud Infrastructure for the real world.

Workload usage matters as well. A workload with low background usage and only infrequent spikes may call for a different type of cloud instance (a different EC2 instance type, in the case of Amazon) than a workload with more frequent spikes. Haff offered an example of how—just for a single instance—the difference between using an Amazon Web Services (AWS) medium and 2xlarge instance at 50 percent utilization over the course of a year would result in roughly a 6x, or $4,000, difference in cost. Multiply that by the hundreds of instances you might see in a typical production deployment and you get a sense of how important understanding your workloads can be.
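The back-of-the-envelope math behind that comparison can be sketched as follows. The hourly rates below are hypothetical placeholders chosen to land near the ratio cited in the talk, not actual AWS prices.

```shell
# Back-of-the-envelope yearly cost for one instance at ~50% utilization.
# Hourly rates (in cents) are hypothetical placeholders, not AWS pricing.
medium_rate_cents=18     # assumed on-demand rate for a medium instance
xxlarge_rate_cents=110   # assumed rate for a 2xlarge instance

hours=$((365 * 24 / 2))  # ~50% utilization over a year: 4380 hours

medium_cost=$((medium_rate_cents * hours / 100))
xxlarge_cost=$((xxlarge_rate_cents * hours / 100))
delta=$((xxlarge_cost - medium_cost))

echo "medium:  \$${medium_cost}/year"
echo "2xlarge: \$${xxlarge_cost}/year"
echo "delta:   \$${delta}/year"   # roughly the $4,000 gap cited
```

Swap in your provider’s real rates and instance mix, and the same arithmetic scales to a whole fleet.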

Of course, using public clouds isn’t just about the economics. Some organizations choose to use public clouds to allow them to focus on core competencies—which may not include running data centers. It also allows them, or their investors, to avoid making capital outlays for server gear against an uncertain future.

Finally, Haff discussed some of the issues associated with compliance and governance in public clouds. In general, the issue isn’t so much security in the classic sense as audit and data management. Of particular concern of late are regulatory regimes governing data placement and notifications. These differ widely by country and state, and even the provider’s nationality can matter, wherever the data may physically reside. (Regional providers are sometimes preferred as a result.)

Circle then discussed how to consume Red Hat products—including but not limited to Red Hat Enterprise Linux—on Red Hat Certified Cloud Providers.  There are currently about 75 CCPs. These are trusted destinations for customers who want to use public clouds as an integral element of a hybrid cloud implementation. They offer images certified by Red Hat, provide the same updates and patches that you get directly from Red Hat, and are backed by Red Hat Global Support Services.

You can use Red Hat products in public clouds through two basic mechanisms: on-demand and Cloud Access.

On-demand consumption is available in monthly and hourly consumption models. Some public cloud providers also have reserved instances for long-term workloads. You engage with the CCP for all support issues, backed by Red Hat Global Support Services, and the CCP bills you for both resource consumption and Red Hat products. The CCP handles updates through its Red Hat Update Infrastructure.

You can think of this as “RHN for the public cloud,” and it’s immediately available and transparent to you. Certain CCPs (currently AWS and Google Cloud Platform) also offer a “bring your own subscription” option called Cloud Access. Cloud Access provides portability of some Red Hat subscriptions between on-premises and public clouds. You keep your direct relationship with Red Hat but consume on a public cloud. A new Cloud Access feature, introduced on April 7, lets you import your own image using a cloud provider’s import tools rather than using only a standard image. With Cloud Access, you will typically use Red Hat Satellite to manage updates for both on-premises and CCP images.

The takeaways from this talk?

  • Develop an appropriate application architecture
  • Ensure data is portable: test, test, test!
  • Understand the legal and regulatory compliance requirements of your applications
  • Isolate workloads as needed in a public cloud
  • Choose a cloud provider that is trusted and certified
  • Do the ROI to determine the right consumption model
  • Ensure consistent updates for your images to maintain application certifications
  • Enable hybrid cloud management, policy, and governance


More information


Event: Red Hat Summit 2014
Date: Tues. April 15, 2014
Type: Session
Track: Cloud readiness
Title: How to use Red Hat solutions in a public cloud
Speaker: Jane Circle (Red Hat), Gordon Haff (Red Hat)

Party tonight! Sponsored by IBM

Wednesday, April 16th, 2014


It’s been a great week at our 10th annual Red Hat Summit, but we’re not done yet! After the second full day of keynotes and breakout sessions, we’re excited to partner with IBM to host all attendees at San Francisco’s Exploratorium.
> Learn more about the Exploratorium before you get to the party.

True to the open source way, we’re looking to YOU, Summit attendees, for feedback and participation. Four artists from Berklee College of Music are battling for your vote tonight in the Summit’s first-ever Battle of the Bands. Performers will include:

  • Emily Elbert
  • Salt Cathedral
  • Sierra Hull
  • Shea Rose

After the party, we’ll kick off the annual pub crawl. Six bars are included in the walk back to Moscone South (map below!), so visit one or visit them all.


Monitoring OpenShift performance metrics with Datadog

Wednesday, April 16th, 2014

Visibility into the health of your OpenShift gears just got easier. Datadog is pleased to announce the release of the Datadog cartridge for OpenShift, which collects metrics from OpenShift gears for monitoring, graphing, and alerting within Datadog. Importantly, this cartridge supports both scalable and non-scalable OpenShift deployments; for scalable applications running on OpenShift, the Datadog Agent is automatically installed and configured on every new gear that is auto-deployed.

Visualize OpenShift metrics and events
Immediately after the Datadog agent is installed on an OpenShift gear, CPU, I/O wait, server status, and system load metrics begin to flow into Datadog and are available for graphing. Additionally, the metric agents inherit “tags” from Chef, Puppet, or any other configuration management system, plus any custom attributes defined by the user. These tags can then be used to drill into specific areas of an environment.

Once the Datadog agent is deployed, these metrics will be available for graphing and events will flow into Datadog’s event stream so that users can follow the life of their OpenShift gears chronologically.

OpenShift metrics alerting
OpenShift users will be able to set up alerting on their OpenShift metrics as soon as the Datadog agent is installed on their gears. Of particular note, users will be able to alert on specific gears, or on aggregated metrics from multiple gears working in unison to provide a service.

In addition to generating events in your stream, alerts can be set to go through email, HipChat, PagerDuty, or a number of other messaging services.

See all OpenShift gears and other servers on one page
Datadog collects data from OpenShift gears and from any other servers running on premises or in other cloud environments. These can all be seen on the same page and then filtered based on metrics or tags related to those servers.

Correlating events from other systems with OpenShift Metrics
Datadog will not only collect data from OpenShift, but also metrics and events from all of the systems in your environment, or from custom applications via its API.

Users can then overlay the events from other systems over the collected OpenShift metrics to determine whether another system or piece of infrastructure may have impacted OpenShift gears.
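For example, a deploy script could push such an event into the stream through Datadog’s HTTP events API, roughly as below. The endpoint usage, payload fields, and API key handling are best-effort assumptions worth checking against current Datadog documentation.

```
# Hedged sketch: post a deployment event to Datadog so it can be overlaid
# on OpenShift metrics. Assumes DD_API_KEY holds a valid API key.
curl -s -X POST "https://api.datadoghq.com/api/v1/events?api_key=${DD_API_KEY}" \
     -H "Content-Type: application/json" \
     -d '{"title": "myapp deployed",
          "text": "Rolled out a new build to OpenShift gears",
          "tags": ["app:myapp", "env:prod"]}'
```

Tagging the event with the same tags as the gears makes it straightforward to line it up against the affected metrics.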


Learn more about Datadog & OpenShift in the Datadog booth at Red Hat Summit today from 10 a.m. until 2 p.m.

