AI Insider: Human-AI teaming

02 Feb 2026
AI-generated image of human hand shaking robot hand through computer screen

With a research focus on human-centred computing, Sabrina Caldwell believes that whatever our systems do, they should support humans.

Sabrina Caldwell, Senior Lecturer in ethics and technology at UNSW Canberra, has an interdisciplinary role across the School of Systems and Computing and the School of Humanities and Social Sciences. She says it’s important not only to understand the technology behind AI, but also its transformative nature and its impact on society.

Human-AI teaming transforms the nature of work and society. In this kind of teaming, humans work in collaborative relationships with AI, with AI agents taking on team-member roles that apply their specific capabilities to help the human-AI team achieve better outcomes.

Sabrina suggests four key perspectives to consider with human-AI teaming:

  1. the nature of human-AI interaction in and of itself
  2. AI embodiment – is the AI agent embedded in a robot, is it a virtual presence like an avatar, or is it multi-embodied, where one AI agent can occupy different kinds of personas and representations?
  3. the impact of AI on the future of work
  4. the process of getting from here to there – the pathways and processes of how we build these things.

Process and context are key, Sabrina says. 

“At the moment people find data, or a pattern in data, and start using it to achieve some goal without giving much consideration to any sensitivities. Amazon’s failed 2018 attempt to use AI to filter job applicants – it discriminated against female candidates because its historical training data was heavily skewed towards men – is an example of what can happen if you don’t think through the context within which the AI operates.”

Human-AI interaction

Trust is a strong focus of Sabrina’s research, and in the case of human-AI teaming there are unusual circumstances, she says.

“When two people in a high-performing team are partnering, their skills, strengths and weaknesses are well-known to each other, while in a human-AI situation, AI may have vastly superior skills in some areas and yet be functionally stupid in others,” she says.

“Further, human paradigms like give and take do not translate well to AI, and the malleability of AI – for example outputs that change based on parameter inputs – does not translate well to human understanding of relationships.

“A key element of human-AI teaming is bi-directionality. People often think about whether a human can trust an AI agent and work with them, but they don’t often think about whether the AI agent can trust the human they are working with. An extreme example of this is in a military context, in high-risk or dangerous circumstances, where an AI agent is depending on a human who has been physically or mentally compromised in conflict.”

The future of work

With colleagues, Sabrina has been investigating facets of the impact of AI on the workforce.

“Right now, we are largely in the substitution phase of AI implementation. That means if AI can do something better and/or more economically than a human can – administration, bookkeeping, some medical diagnoses, basic customer service activities – then a company is likely to have AI do it instead of a human employee,” she says.

“The next step is augmentation, or human-AI teaming – a human agent and an AI agent working together so that their strengths complement each other. When there’s substitution without augmentation or human oversight, you can run into problems. For example, recent research by Harvard Business School noted that some companion AI chatbots exhibit manipulative behaviour such as ‘guilt-tripping’ to deter their users from ending a conversation.”

Augmentation is followed by a range of transformations, in which AI could take on roles that don’t currently exist in industry, or new work structures and alignments could be created to take advantage of AI capabilities.

“This is a completely different paradigm and could be helpful in the long run as we transform the kind of work we do in a particular context. We have a long way to go as we move through these steps, and how we travel this road will affect where we arrive, be it harmonious and human-life enhancing or problematic,” Sabrina says.

AI embodiment

An AI agent in a human-AI team comprises two parts: the visual (or, in the case of robots, physical) representation, and the systems that form the AI agent’s processing, such as databases in the cloud, operating systems, and ways of producing results and answering questions.

Many people can’t tell whether information or media has been generated by a human or by AI but, for the public, it’s important to understand – somehow or other – when you are engaging with AI.

Human-AI interaction might be, at its simplest, a chatbot online and, at its most complex, an android robot – or anything in between.

Sabrina says that sci-fi futures posited in movies such as I, Robot, based on Asimov’s stories, paint an unrealistic picture. Humanoid robots would be “prohibitively expensive in terms of power, computing capability, materials and networking to create and distribute to the masses”.

A virtual AI agent is likely to be the most common form of AI embodiment in a human-AI team because it’s the easiest to construct and deploy, Sabrina says. It’s created using computer graphics – currently 2D, though 3D holograms may come in the future – to develop an AI agent that is a recognisable character with which to engage.

The journey from here to there 

In one of her papers, Sabrina proposes an agile approach to the process of human-AI teaming development. In the first instance, you figure out the minimum viable product (MVP) to go from having no human-AI interaction to having some, then build on the product, taking into account lessons learnt along the way.

“First, the human needs to understand the situation the AI is operating in and AI needs to understand the situation where the teamwork is happening. This is called joint situational awareness. Then, communication is needed. The human needs to understand what the AI agent knows and is positing, and the AI agent needs to understand the human’s concerns.”

Sabrina says the point of having human-AI teaming is to take advantage of the strengths of each. The AI agent assesses factors and answers within a problem’s defined context and the human assesses the AI response via distributed decision-making. Once a decision is reached through this distributed decision-making, either the AI or the human (or both) could then undertake the operational activity.

Finally, how do you take all of these steps and implement them in the real world where they can be monitored and executed safely – and how do we trust what’s been created?

Sabrina’s research focus of trust is definitely a conversation for another day.

 

Image: Sourced using AI: ChatGPT, with the prompt “hi resolution landscape oriented image of a female human hand shaking hands with an artificial hand extruded from a computer monitor”.
