The aim is to run one of these AI roundtable events each quarter, so do have a look at joining the AI Lab if you want to hear about them first: https://lnkd.in/epH93FNk
Takeaways from the roundtable discussion
The STRAT7 AI Lab held its second Gen AI roundtable this month – focused on ethics and confidence, and how they affect adoption. We spent two hours unpacking this fast-moving area, steered by the excellent Sarah Askew and Ross Denton (and without a single PPT slide in sight!). Joining us in the room were seven client-side insights professionals, as well as trusted partners working in this arena to lend their perspective.
Here are five takeaways from the session:
- Single source of truth: There was concern that AI-generated outputs (e.g. deep research reports) can now create multiple versions of the truth, or even be coerced into validating a preconceived idea – all at the click of a few buttons. Research practitioners' jobs will need to evolve to put these outputs in context and package them up in ways that enable decision making, not inertia.
- EQ of AI: There was healthy scepticism in the room about the gap between Generative AI's outputs and what it fundamentally understands about human behaviour. AI is extremely good at summarising and generating content at very low cost, but it can easily miss the 'Why?' behind certain behaviours (e.g. body language and tone of voice in qual research) and has a low emotional quotient (EQ).
- Heavy lifting vs heavy thinking: A consensus that AI should be employed for lots of ‘heavy lifting’ tasks across the market research value chain. But it cannot (and should not) replace ‘heavy thinking’ from experts who can still connect the dots and package up research better than any agentic AI tools currently on the market.
- Unbundling of tasks: AI is starting to unbundle the tasks that previously delineated roles in market research. Those who get ahead and redefine their roles will become 'architects', able to link together tools and workflows with expert prompting. More change management training will be required to empower people to use these new tools and produce outputs of the same quality as before.
- Bias and representation: There was a feeling that a core blind spot of LLMs is that they generalise from an internet that is 'WEIRD' (read more on this here: https://lnkd.in/eT2B4Cy2). It is up to researchers to push back on this innate characteristic by carefully considering which tools to use and risk-assessing how AI can exclude and marginalise people who do not live in English-speaking, Western countries.
At Incite we are actively helping our clients understand what this new age of AI means for them personally, as organisations, and as brands – and what it means for their customers. It's no longer about when you adopt, but how and why. We are unpicking those challenges to ensure AI is elevating insights and making our clients stronger.