In my job as a Consulting Architect with Red Hat’s Open Innovation Labs, I am in the fortunate position of helping to kick-start new product teams. We use a range of open practices to bootstrap these teams and set them up for success. One practice in particular, I’ve come to realise, is foundational to a great start: Social Contracts.
Machine Learning is a tool that is rapidly becoming more accessible to enterprises, yet for all its power it is still often poorly understood. This blog post aims to demystify the concepts around Machine Learning, define much of the vernacular common to the practice, and explain how Red Hat teams can help today. It extends the information provided in our whiteboarding video.
In recent years, application development has begun to focus on a new concept: containers. No, these aren’t shipping containers, but most people working in a tech field have heard the term ‘container’ come up in a technical design meeting or in a discussion of ‘the future of technology.’ It has quickly become both a buzzword and an important concept. But what actually is a container? What is all the excitement about? And why should we care?
Check out our YouTube video: Container fundamentals, security and usage in the enterprise.
How can Ansible help automate Virtual Network Function (VNF) configuration deployment?
Check out our whiteboarding video here!
Today, businesses demand ever more integration between different technologies working together to exchange and process data. In this ecosystem, an integration platform is essential, and an in-memory database can deliver a significant gain in data-processing performance. In this article, we demonstrate the use of the JBoss Fuse integration platform in conjunction with the in-memory database Red Hat Data Grid.
Automation is everywhere. With the pace of modern business continuously accelerating, companies must deliver better products to market faster. But extended release cycles, error-prone releases, and unauthorized shadow IT make keeping up with market demands difficult. In this new digital landscape, automation has become a critical part of DevOps’ ability to enact and enforce best practices, and to apply the skills that support IT and help businesses overcome these challenges.
This post was originally published on https://dev.to/tylerauerbeck.
Traditionally, very clear battle lines have been drawn for application and infrastructure deployment. When you need to run a virtual machine, you run it on your virtualization platform (OpenStack, VMware, etc.), and when you need to run a container workload, you run it on your container platform (Kubernetes). But when you’re deploying your application, do you really care where it runs? Or do you just care that it runs somewhere?
This is where I entered the discussion, and I quickly realized that in most cases I really didn’t care. What I knew was that I needed the resources required to build my application or run my training. I also knew that avoiding the need to manage multiple sets of automation would be an even bigger benefit. So if I could have both running on a single platform, I was absolutely on board to give it a shot.