Together, not against each other: How can we work in parallel with AI to move forward together?

At a time when artificial intelligence (AI) is grabbing headlines for promising to replace human jobs, it's worth shifting our focus. Beyond the fantasy of "robots coming for our jobs," we face a much more interesting challenge: How can we truly work with AI (and not against it) to multiply what we do? As a biostatistician who has shifted towards specializing in AI applied to science, automation, and workflows for researchers, I offer you an honest, practical, and strategic perspective, designed for scientists, healthcare professionals, engineers, computer scientists… and really for anyone who wants to take advantage of this era (and not be left behind).


Why does it no longer make sense to talk only about “AI replacing jobs”?

For years, the argument has been repeated that AI will eliminate jobs: faster machines, without human error, that will analyze data, make decisions, etc. But the evidence paints a different picture:

  • Research from the MIT Center for Collective Intelligence indicates that a human-AI combination does not necessarily outperform the best human alone or the best AI system alone. In other words, "a human alongside AI" is no guarantee of automatic improvement (MIT Sloan).
  • The Forbes Technology Council is already talking about the shift from "automation" to "augmentation": AI is no longer just about cutting out human tasks, but is serving as a co-pilot that enhances our capabilities (Forbes).
  • In real-world work environments, the division between what AI does and what humans continue to do is key. But many organizations still struggle to clearly define this “line of separation” (Axios and World Forum).

In short: the challenge shifts from "how does AI replace me?" to "how do I work with AI so that we are both better?"


What does each side (human and machine) contribute, and where is the collaboration?

What AI does well

  • Processing large volumes of data quickly and often more consistently than a human.
  • Finding recurring patterns, generating predictions based on statistics, and executing tasks that follow fixed rules.
  • Automating workflows that previously consumed a lot of time (e.g., data transformation, pre-processing, standard reports).

What remains human domain

  • Interpreting new, ambiguous, or poorly defined contexts.
  • Contributing judgment, values, ethics, intuition, and creativity.
  • Recognizing when AI makes mistakes or goes off track (biases, "hallucinations", generalization errors).

The collaboration zone (the "leap")

Where human + AI together achieve more than either one separately:

  • The AI suggests options, the human validates or adjusts.
  • AI generates drafts or hypotheses, the human refines and provides domain context.
  • AI automates the heavy lifting, humans focus on what adds value: asking questions, making decisions, innovating.
  • A workflow is designed in which both sides "understand each other": the AI must be transparent enough for the human to trust and correct it (Simple Science and arXiv).

Five challenges you should know about (and overcome) right now

Working in parallel with AI isn't plug-and-play. Here are five real obstacles you should be prepared for:

  1. Trust and explainability
    AI can do amazing things, but if humans don't understand why it produces the outputs it does, trust erodes. Research on human-AI teaming identifies the need for AI to be "responsive, situationally aware, and flexible in decision-making" to function as a team (Frontiers).
  2. Unclear division of roles
    Who does what? Which tasks are handed over to the AI? What does human oversight look like? When does the human intervene and when does the machine act? A poorly defined workflow generates errors, duplication, or gray areas (World Forum and SpringerLink).
  3. Mutual learning and adaptation
    AI doesn't learn the same way a human does; humans, for their part, must adapt to a new partner (the AI). Understanding the differences (for example, how humans generalize vs. how AI does) is essential for smooth collaboration (Simple Science).
  4. Change in professional skills
    If your value used to be "I do X better than a system," now your value might lie in "I manage, supervise, and complement an AI system that does X." It's a change in profile: more strategy, less routine operations.
  5. Ethics, biases, and human oversight
    Machines replicate biases and make mistakes that a human can detect, and there are ethical implications in the tasks where AI is involved. Humans must remain active throughout the process, not just as final reviewers.

A practical strategy for professionals: how do you get started today?

If you work in biostatistics, science, clinical practice, engineering, computer science, or generally in a technical environment, here's a roadmap to get you started working in parallel with AI:

Step 1: Map your workflows

– Identify the repetitive, low-value-added tasks that steal your time.
– Identify the decisions that require judgment, creativity, and interpretation.
For example: in data science, perhaps the pre-processing of hundreds of experiments can be handled by an automated pipeline, but the questions of which hypothesis to formulate and how to interpret the results remain human ones.
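
To make this mapping concrete, here is a minimal Python sketch of a task inventory. The step names and the two tags are hypothetical placeholders, not a prescribed taxonomy; in practice you would fill this in from your own pipeline.

```python
# Minimal sketch of a workflow inventory: tag each step as a candidate for
# automation or as one that requires human judgment.
# Step names and tags are illustrative, not a prescribed taxonomy.

workflow = [
    {"step": "merge raw experiment files",    "repetitive": True,  "needs_judgment": False},
    {"step": "normalize and QC the data",     "repetitive": True,  "needs_judgment": False},
    {"step": "choose the hypothesis to test", "repetitive": False, "needs_judgment": True},
    {"step": "interpret and report results",  "repetitive": False, "needs_judgment": True},
]

automation_candidates = [t["step"] for t in workflow if t["repetitive"] and not t["needs_judgment"]]
human_tasks = [t["step"] for t in workflow if t["needs_judgment"]]

print("Candidates for an automated pipeline:", automation_candidates)
print("Decisions that stay with the researcher:", human_tasks)
```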

Step 2: Introduce AI as a co-pilot

– Automate the mechanical steps (always with supervision).
– Use AI models to generate drafts or suggestions (for example: generate base reports, extract patterns, perform automatic visualizations).
– Design an explicit human step for verification, adjustment, and interpretation.
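
As a rough illustration of this co-pilot pattern, the sketch below chains an automated pre-processing step, an AI-style draft, and an explicit human verification gate. The functions preprocess, draft_summary, and human_review are stand-ins invented for the example, not any specific library's API.

```python
# Minimal sketch of a "co-pilot" flow: the pipeline automates the mechanical
# pre-processing and drafts a summary, then everything passes through an
# explicit human verification step before it is used.
# All functions here are placeholders for whatever tools you actually use.

def preprocess(raw_records):
    # mechanical step: drop incomplete records (stand-in for real cleaning)
    return [r for r in raw_records if r.get("value") is not None]

def draft_summary(records):
    # stand-in for an AI-generated draft (report text, suggested pattern, etc.)
    mean = sum(r["value"] for r in records) / len(records)
    return f"DRAFT: {len(records)} records, mean value {mean:.2f} (pending human review)"

def human_review(draft):
    # the explicit human step: verify, adjust, interpret
    print(draft)
    return input("Approve this draft? [y/n] ").strip().lower() == "y"

raw = [{"value": 1.2}, {"value": None}, {"value": 3.4}]
clean = preprocess(raw)
draft = draft_summary(clean)
if human_review(draft):
    print("Approved: the draft moves on to the report.")
else:
    print("Sent back for adjustment by the human.")
```

The important design choice is that nothing leaves the pipeline without passing through the human gate.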

Step 3: Clearly define the human-AI roles

– When does AI act without human intervention (and with what supervision)?
– When does the human take over?
– What criteria define that AI is failing or not reliable enough?
It is important that the human has the final say (or at least a feedback mechanism).
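
One lightweight way to make these criteria explicit is a confidence-based routing rule: the AI acts on its own only above a threshold, and everything below it is escalated to a human. In the sketch below, the confidence score and the 0.90 threshold are assumptions you would calibrate for your own system, not recommended values.

```python
# Minimal sketch of an explicit human-AI role boundary: the AI's suggestion is
# accepted automatically only above a confidence threshold; anything below it
# is escalated to a human, and every decision is logged for later audit.

AUTO_ACCEPT_THRESHOLD = 0.90  # assumption: calibrate for your own system

def route(ai_suggestion, confidence):
    # accept automatically above the threshold, otherwise escalate to a human
    action = "auto" if confidence >= AUTO_ACCEPT_THRESHOLD else "human_review"
    print(f"confidence={confidence:.2f} -> {action}: {ai_suggestion}")
    return action, ai_suggestion

route("flag sample 12 as an outlier", 0.97)       # AI acts; human can audit later
route("exclude batch 3 from the analysis", 0.55)  # human takes over
```

The point is less the specific threshold than that the hand-off criterion is written down and logged, so the human keeps the final say.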

Step 4: Build a “human-AI team” culture and acquire skills

– Train yourself to understand what AI can do, including its limitations.
– Acquire AI review skills: detect errors, biases, evaluate outputs.
– Develop a collaborative mindset: AI is not an adversary, it is an ally.
– Monitor the performance of the human-AI system: not only whether the AI "produces more", but whether the collaboration produces better overall results (fewer errors, more innovation, better scientific value).
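
One concrete way to practice that review skill is a periodic spot check: compare the AI's outputs against a small human-labeled sample and look at where they disagree. The labels and predictions in the sketch below are invented purely for illustration.

```python
# Minimal sketch of an "AI review skill": spot-check the AI's outputs against a
# small human-labeled sample and estimate how much you can trust it.
# The labels and predictions are invented placeholders, not real data.

human_labels   = ["outlier", "ok", "ok", "outlier", "ok", "ok", "ok", "outlier"]
ai_predictions = ["outlier", "ok", "outlier", "outlier", "ok", "ok", "ok", "ok"]

agreement = sum(h == a for h, a in zip(human_labels, ai_predictions)) / len(human_labels)
disagreements = [i for i, (h, a) in enumerate(zip(human_labels, ai_predictions)) if h != a]

print(f"Human-AI agreement on the spot check: {agreement:.0%}")
print(f"Cases to inspect by hand (possible AI errors or biases): {disagreements}")
```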

Step 5: Continuous improvement and adaptation

– Like any human team, the human-AI team needs feedback and improvement.
– Collect metrics: time saved, number of errors, level of human supervision, degree of user satisfaction.
– Adjust the flows as the system becomes more mature.
– Consider ethics and governance: ensure that AI operates within a responsible framework.
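
For the metrics themselves, even a small structured log is enough to get started. The sketch below tracks a few of the quantities mentioned above (time saved, errors caught, escalations, satisfaction); the field names and numbers are purely illustrative.

```python
# Minimal sketch of tracking the collaboration itself, not just the AI:
# time saved, errors caught by the human, and how often escalation was needed.
# Field names and values are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class CollaborationLog:
    minutes_saved: float = 0.0
    errors_caught_by_human: int = 0
    tasks_total: int = 0
    tasks_escalated_to_human: int = 0
    satisfaction_scores: list = field(default_factory=list)

    def escalation_rate(self) -> float:
        # how often the human had to step in
        return self.tasks_escalated_to_human / self.tasks_total if self.tasks_total else 0.0

log = CollaborationLog(minutes_saved=320, errors_caught_by_human=4,
                       tasks_total=50, tasks_escalated_to_human=9,
                       satisfaction_scores=[4, 5, 4])
print(f"Escalation rate: {log.escalation_rate():.0%}")
print(f"Mean satisfaction: {sum(log.satisfaction_scores) / len(log.satisfaction_scores):.1f}/5")
```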


Why does this strategy matter now?

Because we are at a specific juncture, and not just because it's fashionable:

  • More and more organizations (from healthcare to industry) are adopting human-AI collaboration models: the "human + AI" pairing is becoming a paradigm. For example, the consulting firm Tata Consultancy Services describes it as a "civilizational model" in which AI agents work alongside humans (The Times of India).
  • Science and research (the field in which you operate) have a critical need for automation: growing datasets, the need for reproducibility, and pressure to deliver results. AI can be a multiplier.
  • But society is also pushing for AI to be not just a replacement tool but a means of human empowerment. At the recent TIME100 Impact Dinner, it was emphasized that "AI should serve humans, not replace them" (TIME).
  • Finally, from a professional point of view, this is an opportunity: whoever is capable of working with AI (and not against it) will have clear competitive advantages: more efficiency, more innovation, more added value.

Some final recommendations (and warnings)

  • Don't expect AI to "do everything" right away. Collaboration is gradual. Start small, with specific tasks, and grow from there.
  • Maintain transparency: ensure that you, as a human, know what the AI is doing, how it's doing it, and when it might make mistakes. This increases trust and reduces risks.
  • Do not abandon human skills: judgment, ethics, creativity, and domain knowledge will continue to play an essential role.
  • Keep in mind that not all jobs are "automatically collaborative" with AI: some processes may be complex, ambiguous, or unstructured (there AI can hardly act as a replacement, and the collaboration may require more design).
  • Remember that the human-AI partnership is not "AI does everything, the human only supervises." It is more like "human and AI have complementary roles and work together in a collaborative flow." When roles are unclear, results worsen (SpringerLink).

In summary

Working in parallel with AI is not a fad; it is a strategy for the future. And as professionals in science, technology, health, or engineering, we are not destined to be replaced or to be mere passive supervisors of machines: we can be the architects of that collaboration.

If you're interested: as a biostatistician now specializing in AI for scientists and automation, this is precisely my specialty: designing workflows where data, machine, and researcher converge. That triad is what will make the difference.

The key message for anyone reading this: AI won't replace you if you use it properly, but it could leave you behind if you don't adapt to collaborating with it.
