At work the drive is for simplification, but simplification from a "stakeholder" perspective. The Six Sigma "customer" is now a "stakeholder" in the new paradigm.
I'm not complaining: it's easier to control good processes. Put another way, good processes are the ones stakeholders actually follow, so they require fewer external controls.
So for quality outcomes, it's very important that stakeholders be able to navigate a process effectively.
With the MBA-speak out of the way, this simplification drive results in greatly increased complexity on the back end (in HPC anyway).
As a researcher, I have a simulation job that I need to submit to our compute resources. The job requires a certain number of processors to execute optimally. It also requires a large amount of very fast scratch storage during execution, and a smaller amount of stable long term storage on which to store the inputs and results.
What I do today is go to the web-based job submission portal, select the application, tell it the name and location of the input files, and how many processors I need.
This is simple! I have no need to be aware of the underlying hardware architecture, or manage the location of the data, since the middleware does that all for me!
But I'm not a researcher. I develop and manage that middleware, and it's complicated.
First, we have to take that one requested processor count and map it onto hardware with varying numbers of processors per machine. This is an NP-hard bin-packing problem: computing an optimal solution is very expensive, but approximating a valid one is fairly simple.
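To make the "approximate a valid solution" part concrete, here is a minimal sketch of first-fit decreasing, a classic bin-packing heuristic. The node names, core counts, and task sizes are invented for illustration; a real scheduler tracks far more state than free cores.

```python
# Hypothetical sketch: first-fit decreasing for packing process groups
# onto machines with varying core counts. Not a real scheduler.

def assign_tasks(task_sizes, machines):
    """Greedily place tasks (largest first) onto machines with limited
    free cores. Returns {machine: [task sizes]}; raises if a task
    cannot fit anywhere."""
    free = dict(machines)  # machine name -> free cores
    placement = {name: [] for name in machines}
    for size in sorted(task_sizes, reverse=True):
        for name, cores in free.items():
            if cores >= size:
                free[name] -= size
                placement[name].append(size)
                break
        else:
            raise RuntimeError(f"no machine can fit a task of {size} cores")
    return placement

machines = {"node01": 16, "node02": 24, "node03": 8}
print(assign_tasks([8, 8, 4, 12, 6], machines))
```

Sorting largest-first is what turns naive first-fit into a usably tight approximation: big tasks claim the big gaps early, and the small tasks backfill what remains.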
This is easy when the number of processes maps one-to-one to the number of processors, but with OpenMP that is no longer the case. Now our tasks have multiple threads, and performance can suffer if a task's threads are split between CPU sockets.
What this means is one more checkbox for the user, and another layer of calculation on my end, all in the name of simplicity!
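That extra layer of calculation might look something like the sketch below: before committing a node, check whether a multithreaded task can be kept entirely within one socket. The two-socket node shapes here are invented; real middleware would read the topology from the hardware (e.g. via hwloc) rather than hard-code it.

```python
# Hypothetical sketch of the socket-awareness layer: prefer placing an
# OpenMP task's threads entirely within one CPU socket.

def pick_socket(threads_per_task, sockets_free):
    """sockets_free: free cores per socket on one node.
    Return the index of a socket that can host the whole task
    (best fit), or None if the task would have to be split."""
    candidates = [(free, i) for i, free in enumerate(sockets_free)
                  if free >= threads_per_task]
    if not candidates:
        return None  # caller must split the task or try another node
    # Best fit: the tightest socket that still keeps all threads together.
    return min(candidates)[1]

# A node with two 8-core sockets, partially occupied.
print(pick_socket(6, [4, 8]))  # socket 1 keeps all 6 threads together
print(pick_socket(6, [4, 5]))  # None: task would straddle the sockets
```

When `pick_socket` returns `None`, the middleware has to decide on the user's behalf whether splitting across sockets (and eating the NUMA penalty) beats waiting for a better slot, which is exactly the decision the checkbox exposes.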