Digital digest

Throwback to Web Summit 2023

Posted by: Lucile Gouvernel

Head of Strategy

Last week, we headed to Lisbon to attend the Web Summit. Unsurprisingly, there was a lot of talk about AI. To be fair, the topic dominated the stages, somewhat to the detriment of the range and diversity of the innovations presented, and of the points of view and debates that traditionally make this event so interesting.

One exchange did, however, stand out for me, for a stance radically at odds with the conventional speeches and the forced enthusiasm for AI that all the speakers were visibly expected to display...

Is AI the real deal or all hype?
Meredith Whittaker - President of the Signal Foundation

Admittedly, the future is promising, and the use of different AI models and integrations can lead to real innovations and even disruptions (and a form of progress), for example in healthcare or security... And of course, this raises just as many questions about ethics and regulation, which ought to be imposed but where the players and decision-making bodies seem quickly overwhelmed... As Meredith Whittaker of Signal* rightly pointed out, if we want to avoid the pitfalls of today's Internet (situations of concentration or monopoly, serious questions about net neutrality, enormous egos at the head of platforms that spread disinformation as readily as information, and breaches of ethics that everyone recognizes but where it's too late to turn back), it would be a good idea to invite players other than the tech giants to the table for discussions on the necessary regulation of AI (including the AI Act in Europe, for example).

As an agency, our role in these debates is, in all humility, entirely insignificant. On the other hand, setting our own line of conduct and ethics now for the use of the new tools at our disposal should enable us to anticipate future regulations, and to avoid having a change of model imposed on us at a forced march, as we saw with the rise of the cookieless world or the GDPR (where we had to reinvent performance overnight in the light of the legal constraints imposed).

*If anyone can afford the luxury of a contrarian stance, it's her, and not just because she's a woman at the head of a major tech player: as she repeated at the opening of her exchange on the Central Stage, Signal is NON-PROFIT, which de facto guarantees its users a secure, neutral and high-quality service.

If we focus on our marketing and communication professions, still in connection with AI since that was the tacitly imposed theme of this latest Web Summit, I'd have to say there was a lot of talk but little in the way of concrete action, and therefore a touch of disappointment.
That's why I've chosen to come back to the keynote "A guide to design transformation" by Dan Gardner, CEO of Code and Theory, which also stood out sharply against the conventional panorama of AI panegyrics...

Dan Gardner begins by explaining that, as with every tech revolution before it, there are two spontaneous (and ineffective) ways to react to the advent of an innovation: overreact (and rush in) or do nothing. Either way, it's a mistake. It's easy to see why ignoring innovation like the elephant in the room is not a recommended approach... but how exactly do we overreact to a technological promise?

This is probably the part of his talk that I found most interesting, when Dan Gardner outlines the 5 pitfalls of a hasty reaction to AI:

  1. Pitfall #1: Considering AI as a pure and simple way to cut costs: focusing on savings where there is precisely an opportunity to innovate is rather illogical, and probably not the path to success... Like any other technology before it, AI needs to be tamed and mastered, which requires invested time and expertise before it can prove itself... so no automatic reduction in costs.
  2. Pitfall #2: Immediately wanting to present use cases: insisting on being the first (to test, to say so, to do it, to communicate about it...) at the risk of grasping the technology superficially and using it gratuitously and without justification... It will take time to identify convincing cases and usage models that offer real added value. It's a safe bet, then, that the first to make such claims are not presenting the most relevant solutions...
  3. Pitfall #3: Lack of originality in message and execution: let's face it, "first ideas" are rarely the best (indeed, they're precisely the ones every good strategist learns to rule out straight away, because they're necessarily the ones that come spontaneously to everyone's mind). In a profession where, in the service of brands, we strive for singularity, haste is probably not the best way to identify (test, try out, refine before obtaining) a unique, differentiating solution that produces meaning and value in an application specific to a brand and its context.
  4. Pitfall #4: Not being structurally ready to embrace change: if the organization doesn't allow itself the room for maneuver necessary for agility (even a degree of chaos), you won't get anywhere. (Dan Gardner rightly points out that digital transformation is a business transformation, and that AI transformation is a digital transformation...) A transformation, or a revolution: we need to create the right conditions to welcome this new paradigm.
  5. Pitfall #5: Overlooking bias by ignoring data: preconceived ideas are rife, and at a time when social networks amplify the slightest "point of view" into a general rule (for people in a hurry to adopt an opinion, even an unfounded one), it's tempting to appropriate, hastily and therefore superficially, an opinion or even a simple belief just to "have something to say on the subject". But without objectivity, without collected and analyzed data, all we have is an opinion, vague and hollow, with nothing to back it up. AI is a formidable machine for processing all kinds of data, but it needs to be fed generously before any conclusions can be drawn.

These 5 mistakes highlight what is currently lacking in the use of AI: emotional intelligence, and, I would add, hindsight. Dan Gardner reminds us that, at this stage, nobody (in our professions, I mean; we're not MIT researchers) is or can claim to be an AI expert (curious, interested, enthusiastic, beta testers, early adopters, no doubt; but not experts). Humility (and hard work) must therefore be at the heart of an approach in which we are all learners, building a promising but still work-in-progress model.