Managing High Performance Workloads

Published: Oct. 2, 2017, 4 a.m.

Show: 8

Show Overview: Brian and Tyler talk with Jeremy Eder (@jeremyeder, Senior Principal Software Engineer at Red Hat) about the Kubernetes Resource Management Working Group, scaling Kubernetes environments, extending Kubernetes for high-performance workloads (HPC, HFT, animation, GPUs, etc.), testing at scale, and how companies can get involved.

Show Notes:

Topic 1 - Welcome to the show. You recently introduced the Resource Management Working Group within Kubernetes. Tell us a little bit about the group.

Topic 2 - The charter of the Resource Management Working Group enumerates a prioritized list of features for increasing workload coverage on Kubernetes (below). Let's talk about some of the use cases you're hearing that drive these priorities. (An illustrative example follows the list.)

  • Support for performance-sensitive workloads (exclusive cores, CPU pinning strategies, NUMA)
  • Integrating new hardware devices (GPUs, FPGAs, InfiniBand, etc.)
  • Improving resource isolation (local storage, hugepages, caches, etc.)
  • Improving Quality of Service (performance SLOs)
  • Performance benchmarking
  • APIs and extensions related to the features mentioned above
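
To make the first few items concrete, here is a minimal sketch in Go (using the Kubernetes API types) of a container that requests the kinds of resources the group is targeting: whole CPUs with requests equal to limits, which lands the pod in the Guaranteed QoS class that exclusive-core/CPU-pinning policies act on, plus a GPU surfaced by a device plugin and pre-allocated hugepages. The image name and quantities are hypothetical, and the nvidia.com/gpu and hugepages-2Mi resource names assume the corresponding device plugin and hugepages support are enabled on the cluster.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // Identical requests and limits, with CPU as a whole number of cores,
        // put the pod in the Guaranteed QoS class, which is what
        // exclusive-core / CPU-pinning policies act on.
        rl := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("4"),   // whole cores, no fractions
            corev1.ResourceMemory: resource.MustParse("8Gi"),
            "nvidia.com/gpu":      resource.MustParse("1"),   // exposed by a device plugin
            "hugepages-2Mi":       resource.MustParse("1Gi"), // pre-allocated 2Mi hugepages
        }

        container := corev1.Container{
            Name:  "worker",
            Image: "example.com/latency-sensitive-app:latest", // hypothetical image
            Resources: corev1.ResourceRequirements{
                Requests: rl,
                Limits:   rl,
            },
        }

        fmt.Printf("%+v\n", container.Resources)
    }

In a real cluster this container would be embedded in a Pod (or a Deployment) and submitted with kubectl or client-go; whether the scheduler can actually satisfy these requests depends on the cluster's Kubernetes version and which of the features above are enabled.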

Topic 3 - This is a broad list of areas to focus on. How do you decide what belongs at the kernel level, the Kubernetes level, or the application level?

Topic 4 - How do you go about testing these areas? Are there lab environments available? How will you publish methodologies and results?

Topic 5 - As you talk to different companies, do you feel like they are holding back on deploying higher-performance applications on Kubernetes now, or are they looking for more optimizations?

Feedback?
