Generative artificial intelligence (GenAI) is a game changer, according to most experts within the language technology space. But just how much of a game changer is it, and how can it be utilized to benefit both businesses and individuals?
That’s the focus of a new study by RWS called Genuine Intelligence: Innovate Like You Mean It. In the midst of a tech-induced fog, where conflicting information and prognostications can create a bewildering environment for organizations simply looking to future-proof their businesses, Innovate Like You Mean It cuts through the noise by providing some cold, hard numbers about what members of the C-suite are feeling vis-à-vis AI.
So, what are executives feeling? Well, it’s all in the study’s key findings. Respondents reported that:
- 87% are feeling under pressure to implement AI solutions.
- 50% expect GenAI to have a significant impact on their business in the next year.
- 43% believe GenAI is already critical to retaining their competitive advantage.
- 53% are concerned about the accuracy and reliability of GenAI outputs.
- 58% are concerned about the potential impact of GenAI on their Environmental, Social, and Governance (ESG) goals.
One thing’s for sure: No matter what people think about AI, it’s a conversation that needs to happen. As RWS has analyzed in previous surveys, business leaders are feeling real pressure to implement AI at speed and scale, but there are flies in the ointment, including the cost of cutting corners on due diligence, risk management, safety, security, and trust, as well as the undeniable performance problems of new GenAI applications.
“By taking a purposeful and responsible approach to innovation, we can all play our part in making sure the current wave of excitement around GenAI’s potential doesn’t lead to disillusionment, backlash, or even a new AI winter, but instead leads to positive progress for business and society,” the RWS report states.
To that end, RWS surveyed 200 executive respondents, half from the United States (US) and half from the United Kingdom (UK). CEOs accounted for 63% of respondents, while 37% were CTOs, CIOs, or COOs. Furthermore, 60% of respondent organizations employ over 1,000 people, with 37% from the tech sector, 25% from regulated industries, and 20% from manufacturing.
The findings both inform and validate the stance RWS has taken when advising clients on AI deployment. While there are still many pages yet to be written in the history of AI, company experts believe that embracing Genuine Intelligence, a fusion of human and machine intelligence, is the path forward. And it’s exactly that philosophy that informed the creation of their linguistic AI solution, Evolve.
Responsible Investments
Remember that rush when the AI hype train first pulled into the station? Much of it has not yet dissipated despite the obstacles the technology has faced: 85% of respondents describe themselves as interested, 76% as excited, and 68% as informed, while 17% are cautious and another 17% are concerned.
Likewise, 42% of UK respondents and 58% of US respondents believe AI will have a significant impact on their business within 12 months. Those figures rise to 64% (UK) and 63% (US) within one to two years, and to 78% and 75% respectively within three to five years. Those escalating numbers are in line with expert analysis forecasting an extended runway before AI reaches its full potential.
“The Gartner Hype Cycle for AI suggests GenAI is expected to take five to ten years to move from its current position at the peak of inflated expectations to reach the plateau of productivity,” the RWS report states. “Equally, Goldman Sachs doesn’t expect to see serious impacts from GenAI before 2027.”
“The benefits may not be as immediate or as transformative as the headlines suggest, but the long-term positive impacts on customer experience, employee productivity, and business efficiency will be all the more significant, providing solutions that are accurate, compliant, reliable, and secure — enhancing rather than damaging brand and organizational reputation,” the report later adds.
With massive amounts of capital at risk in the event of bad decisions, RWS recommends splitting resources according to the 70:20:10 innovation model. Under the model, 70% of capital is dedicated to exploring immediate, incremental, or core innovations, while 20% fuels adjacent, medium-term innovation and 10% funds transformative, long-term projects.
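To make the arithmetic concrete, here is a minimal sketch of the split; the budget figure and function name are purely illustrative and are not drawn from the report.

```python
def split_innovation_budget(total: float) -> dict[str, float]:
    """Apply the 70:20:10 innovation model to a total innovation budget."""
    return {
        "core (immediate, incremental)": total * 0.70,
        "adjacent (medium-term)": total * 0.20,
        "transformative (long-term)": total * 0.10,
    }

# Example: a hypothetical $5M annual innovation budget
allocation = split_innovation_budget(5_000_000)
print(allocation)
# {'core (immediate, incremental)': 3500000.0,
#  'adjacent (medium-term)': 1000000.0,
#  'transformative (long-term)': 500000.0}
```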
Ethical Deployment
That’s just one step toward responsible AI implementation. Ethical advisory boards and AI councils are also invaluable in cutting through the noise surrounding AI, and according to RWS’ research, 90% of organizations have already established such bodies. These councils, along with company leadership itself, should be guided by four pillars of responsible AI implementation:
- Responsible innovation: building trust in the future through collective stewardship of science, product, and service innovation in the present.
- Digital ethics: exhibited when interlinked systems reinforce the work performed in service of responsible innovation, satisfying values in accordance with principles and in support of governance.
- Data ethics: seeks to protect automated decision-support systems from the negative consequences of data quality issues, taking a systemic approach to integrity and provenance throughout the data supply chain.
- Responsible AI: concerned with the fairness, bias, and efficacy of decision-support systems. AI ethics often treats the symptoms of injustice that can emerge from automated systems.
Managing Expectations
As some investors observe a cooling-off period following ChatGPT’s massively hyped launch, it’s easy to forget this isn’t the first time AI enthusiasm has ebbed and flowed. Indeed, these fluctuations began as early as the 1960s with the earliest experiments in machine translation.
That said, even casual observers would likely say there’s something different about the current moment. In Innovate Like You Mean It, RWS experts posit that ChatGPT’s chat interface is the distinguishing factor.
“You could call it the iPhone moment for AI,” the report states. “There were smartphones before the iPhone, but they were yet to capture consumer attention. The Blackberry, for example, was a device for business. With the launch of the iPhone, mobile internet and mobile applications were available to the masses in a revolutionary new device.”
With all that context in mind, it’s still important to manage expectations about AI capabilities, both current and future. A major factor in that need is preserving consumer trust. Overpromising and under-delivering is liable to obscure the genuinely useful aspects of AI, and bridging the AI gap, the gulf in understanding between expectations and reality, is an important step in maintaining that trust. And, as RWS experts point out, it goes further than that.
“It’s not just Big Tech under the spotlight,” reads Innovate Like You Mean It. “There’s also growing awareness, especially within the burgeoning field of data ethics, that many automated decision systems are not fit for purpose, are discriminating against individuals and communities, and providing little transparency or access to redress.”
“The risks involved in launching immature large language model-powered chatbots with inadequate guardrails and oversight have never been greater,” the report continues. “Business reputations can be seriously undermined at the first bad customer experience.”
RWS urges businesses to embrace a transparent approach to their AI deployment. Consumers need to know when they’re interacting with AI, and at every pressure point in the business process, human stakeholders should be overseeing operations and establishing safeguards. That’s the approach favored by survey respondents, too, with 75% of UK respondents and 53% of US respondents reporting a great deal of stakeholder involvement in GenAI use cases.
Creating Genuine Intelligence
For RWS, the AI debate is too often defined by an all-or-nothing attitude. The company’s experts see great value in pairing automated technologies with human expertise and creativity, not only in the not-too-distant future but also in the present day. There is real ground to be gained right now in optimizing the relationship between human and computer labor.
As Innovate Like You Mean It points out, there’s also value in looking beyond how AI will change society to consider how it will change us.
“As AI penetrates deeper into people’s daily lives, computer scientist Jaron Lanier comments, ‘the most important thing about technology is how it changes people.’ Research shows that AI technologies may have a negative impact on our sense of personal responsibility — an extension of the ‘computer says no’ problem affecting automated decision systems.”
An AI system that enhances, not diminishes, human potential is key to the RWS AI vision. And that’s exactly how they’ve shaped Evolve, the company’s “first innovation that fully brings to life [their] vision for Genuine Intelligence.” It’s a system built for what AI currently is while leaving room to expand on what it might become.
It is built around an authentic two-way exchange: human input enriches the machine’s outputs, while those outputs inform and focus human input as efficiently as possible. It’s a self-improving system: the more you use it, the better it gets.
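The report doesn’t detail Evolve’s internals, but the general human-in-the-loop pattern it describes (machine drafts, humans correct, corrections feed back into the system) can be sketched roughly as follows. Every name below is a hypothetical illustration, not the actual Evolve API.

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoop:
    """Illustrative two-way exchange: the machine drafts, a human corrects,
    and corrections accumulate so future drafts can draw on them."""
    corrections: list[tuple[str, str]] = field(default_factory=list)

    def machine_draft(self, source: str) -> str:
        # Stand-in for a model call; a real system would invoke an MT engine or LLM here.
        return f"[machine draft of: {source}]"

    def human_review(self, draft: str, corrected: str) -> None:
        # Human-approved output becomes guidance data for subsequent drafts.
        self.corrections.append((draft, corrected))

loop = HumanInTheLoop()
draft = loop.machine_draft("Bonjour tout le monde")
loop.human_review(draft, "Hello, everyone")
print(len(loop.corrections))  # 1 correction captured for future reuse
```

In a loop like this, quality improves with use because each human correction narrows the gap between machine output and the desired result, which is the sense in which "the more you use it, the better it gets."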
One thing is certain: AI isn’t going anywhere. Its existing and potential applications, plus the volume of investment, all but guarantee it will work itself into our lives one way or another. The task, then, is defining an AI approach that encourages both commercial and human flourishing. And RWS believes it has that approach in its Genuine Intelligence proposition.
“Consumers are conflicted by the technology — excited by its potential but concerned about how it will be deployed by business,” Innovate Like You Mean It concludes. “Addressing this tension requires organizations to take a responsible, long-term approach, with human oversight providing a key element of assurance.”