
Humans are Underrated
Human and Organizational Performance (H&OP)


"Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated."
--Elon Musk
"Tweet by @elonmusk Apr 13, 2018"

Human & Organizational Performance (H&OP) is about building organizations that can tolerate inevitable human error so that no serious injuries or fatalities (SIFs) occur, either in the workforce or among the organization's customers. But what about Organizations that are exploring the augmentation of their human workforces with general-purpose robots that have artificial intelligence (Embodied AI)?

Embodied AI (robots w/AI)
Overview
There has been significant progress in AI from coupling deep learning with newly created, freely available datasets that contain massive amounts of training data (e.g., ImageNet for machine learning on images). This progress has inspired researchers to develop what they call "Embodied AI," or embodied agents. An embodied agent is a robot with AI that learns by exploring and interacting with its environment, where it is challenged to execute tasks.
Ref: "Embodied AI.org" Embodied AI 2021 Workshop

Reinforcement Learning (RL)
Reinforcement Learning (RL) is a branch of machine learning built on actions, states, and rewards. An RL agent is given a set of actions that it can apply to its environment in order to reach a goal. In the video above, the actions are moves on the Rubik's cube, and the goal is the one and only solved state: uniform color on each side of the cube. Here is where the reinforcement comes in: the RL agent earns rewards based on how close each state is to that one solved state. As the robot executes actions, the state of the cube moves either closer to or further from the one and only solution state.
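The "closeness to the solved state" idea can be made concrete with a minimal sketch. The representation below (six faces of nine stickers, reward equal to the fraction of stickers matching their face's fixed centre color) is an illustrative assumption, not the scheme any real cube solver uses:

```python
def reward(state):
    """Dense reward for a cube-like state.

    state: list of 6 faces, each a list of 9 sticker colors.
    Returns 1.0 only in the single solved state; smaller values
    the further the state is from it.
    """
    matching = sum(
        sticker == face[4]          # face[4] is the fixed centre sticker
        for face in state
        for sticker in face
    )
    return matching / 54.0          # 54 stickers in total

# The one and only fully rewarded state: uniform color on every face.
solved = [[c] * 9 for c in "WYROGB"]
assert reward(solved) == 1.0
```

Any move that brings more stickers in line with their centre raises the reward, giving the agent a gradient of feedback instead of a single success/failure signal at the end.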

RL agents like the one in the video start by knowing nothing about their environment. They have only a pre-defined set of actions, which the RL agent uses to "learn" as it gradually receives "feedback" from its environment. The RL agent learns the action sequences that maximize the rewards it receives. RL agents are used in content recommendation and autonomous vehicles.
Ref: "How Reinforcement Learning Chooses the Ads You See" by Ben Dickson, TechTalks, February 22, 2021
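The learn-from-feedback loop described above can be sketched with tabular Q-learning. The toy environment here (a five-cell corridor with a reward only at the last cell) and all the hyperparameters are assumptions for illustration, not the robot hand's actual training setup:

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                     # episodes, started from random cells
    s = random.randrange(N_STATES - 1)
    while s != N_STATES - 1:
        # explore sometimes; otherwise act greedily on current estimates
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # feedback only at the goal
        # nudge Q toward reward plus discounted estimate of future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy heads toward the rewarding cell from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

The agent starts with every Q value at zero, i.e. knowing nothing; the action sequence that maximizes reward emerges purely from the trickle of feedback.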

Physical Learning is Too Slow
Natural intelligence is always biological. Compared to a naturally intelligent system, RL agents are crude, but they work in a similar way because RL agents are a form of behavior-based intelligence. These agents start with nothing: there is no algorithm. Instead, these embodied agents use behavior-based intelligence, along with their sensors and actuators, to learn by physically exploring and adapting to their world. But learning and exploring new, unknown environments is not enough to make robots useful. The greater challenge is for Embodied AI to also change its environment, reason about tool usage, and do the hierarchical planning needed to execute long-horizon tasks. No current AI system meets this challenge today. That is a tall order for Embodied AI.

To get there, physical learning is too slow: it would take a thousand years to physically train the Embodied AI robot in the video above to explore and change its environment in order to solve the Rubik's cube. Enter simulated environments. Computer scientists code simulated environments and worlds to train RL agents. Today's computing power lets an RL agent run thousands and thousands of virtual simulations much faster than real time. In these simulations, an embodied agent is virtually spawned into a simulated environment where it explores and learns to change that environment by being challenged with a task, like moving a box from one room to another.
"Transport Challenge with Three-D World" CVPR 2021 Embodied AI Workshop Hosted by MIT & IBM

AI Gets Massive Amounts of Training via Simulation
AI needs massive amounts of data for training. This is a big, important point. In the video above, the RL agent controlling the robot hand that solves the Rubik's cube has received the simulation equivalent of thousands of years of learning. Listen to the video carefully: the robot needed thousands of years of simulation because it had to learn from scratch. Unlike a human, who comes with motor control and planning intelligence at birth, the AI embodied in this robot starts with zero. There is no algorithm, only "learning" by trial and error, just as a human does. Tasks that a child can do without any training are very complicated for current AI to master.

But that doesn't mean Organizations should cut back on training; it means the opposite. Organizations should be spending more on technical training, a lot more. If Organizations spent more on technical knowledge training and simulation for their workforces, there would be fewer serious injuries and fatalities (SIFs) and higher productivity. Why? Because when a person or team is working near a hazard (even one they have worked near many times), they cannot properly assess the risk without enough technical training. Without technical training, simulation, and re-training, the person or team will not fully understand why the hazard is dangerous or how to protect themselves from harm. Induced voltage contact is a great example: those SIFs could be eliminated if everyone working on high-voltage lines had more technical training and could run "what if" simulations, gaining the technical experience to manage their grounds in ways that let them assess induced voltage risk before they even start working on the system.

Humans + Embodied AI: Great for Non-Repetitive, Cognitive Work...Perfect for Dangerous Work
Elon Musk wanted to build a production plant that was 100% automated, with no humans building his Tesla cars. Notice that Elon Musk had to start ripping out robots because he had "over automated" and it was actually hurting production. So Elon started investing in training humans instead of robots. Why? "Humans are underrated." Unlike almost every C-level executive, Elon was able to realize this because he got out of his office and private jet and started sleeping on the plant floor to learn how the work of building a Tesla car actually gets done, not the way he imagined it was getting done. Now that's commitment. And for Elon it paid off. He was able to see first hand how truly underrated a human workforce is and give his brilliant engineers what they needed to bring the plant online: talented humans working on the production floor with the robots. Most CEOs would never have slept on the plant floor; their engineers would keep telling them they just needed two more weeks to make the robots work, and the Organization would go bankrupt because it constantly underinvested in its most valuable asset: a talented, diverse workforce that continuously learns and evolves when given the freedom to grow.

H&OP for Organizations with Embodied AI
From the videos and discussion above, it should now be clearer that robots with AI and humans both require training, but robots with AI require a lot more of it, and of a different type. Organizations create high-fidelity simulators to train robots to optimize and execute repetitive tasks, while they give humans instructor-led and online technical training so they can handle non-repetitive, cognitive tasks. If more Organizations created or purchased pre-made simulations (e.g., VR) for humans to learn from, by trying what-if scenarios or by interacting with reconstructions of past catastrophic events, then those humans would be much better prepared to protect themselves from hazards, especially when doing dangerous work.

For Organizations thinking of using Embodied AI, H&OP remains the same. It's all about building an Organization that can fail safely when a human or a robot makes a mistake. Learning Organizations know that the assets, the hazards, and the workforce are always evolving. Humans do the non-repetitive, cognitive work while the robots do the repetitive work. Great Organizations know that safety and production are not a binary choice, and when a built-in hazard cannot be removed or substituted, they look for ways to use robots to protect the humans.

Bonus Section for I/O Psychologists
Welcome, Industrial & Organizational Psychologists. Would you like to meet, and maybe even analyze, some Reinforcement Learning (RL) agents?
Well, now you can...
+ Learn More