Understanding the Disruptive Nature of Generative AI
Generative AI will change how work gets done. The challenge is to rethink guidelines for employee behavior.
- By Rob Enderle
- July 24, 2023
A fascinating short video from Wharton's Ethan Mollick showcases that however disruptive you think generative AI is, it is likely far more disruptive than that. In a few minutes, largely through a series of generative AI demonstrations, the video posits that generative AI is potentially far more disruptive than the Industrial Revolution initially was (roughly a 25% improvement in productivity then, versus up to 80% for generative AI).
Generative AI is going to significantly impact how we assess employees and how we authorize the tools employees are using. It challenges us to rethink ethical employee behavior.
Employee Assessment and Disruption
As Mollick points out in his video, once an employee learns how to use generative AI, they see between a 30% and an 80% performance improvement. Depending on the tool, quality may initially suffer, but the impact on output is pronounced. Some employees can finish work that would typically take them a day in less than an hour, which some managers may treat as a reason to ban remote work.
With generative AI, employees report that they are happier, better able to focus on the work they want to do, and better able to accomplish their goals while improving their work/life balance. All of this suggests that generative AI policies should focus not on the time a task takes but on the quality of the result, and that employees should be encouraged both to use approved AI tools and to disclose that they are using them.
The "approved tools" requirement exists so that IT can verify that the tools aren't hostile (that they don't contain malware or expose other security risks). The disclosure requirement ensures that employees aren't ethically compromising themselves, and it encourages colleagues to adopt the approved tools as well. You don't want employees to think cheating is a path they can or should take, and because people don't like change, getting employees who are set in their ways to use new tools can be difficult.
If quality is maintained, it shouldn’t matter to the firm if the job is done in an hour or a day, but care must be taken to ensure quality. Generative AI tools aren’t perfect and can (and do) make mistakes. Even with extra quality control steps, these tools are huge productivity boosters.
Initially, it might even make sense to hold up employees who are most effective at using generative AI tools as examples to others to further promote the ethical use of AI and the disclosure of that use.
Approving the Tools
AI tools are proliferating quickly, and they are increasingly being used for tasks such as coding by people who don't know how to write code. That can be a problem if the tool is compromised and embeds malware in the code it writes, or silently creates backdoors in otherwise secure applications and reports them to an attacker.
That's why IT still needs to approve the AI tools employees use, and the firm should fund those tools when they are appropriate for the employee's job. This makes the user's behavior -- using the tools in the first place -- acceptable, while putting guardrails on that use to better ensure quality and to protect against a hostile or unreliable tool.
Generative AI can look like cheating to some managers. I had a manager who lectured me whenever I used automated tools: work shouldn't be fun, it should be work, and he opposed anything he hadn't used when he did my job. He was a smart guy, but that was an incredibly shortsighted position, given that much of our present productivity has been built on automation for decades.
Having a machine do 30% to 80% of your job sounds like cheating, but it isn't. The 20% or more that AI can't yet do tends to be the thinking part, and that's where an employee should shine. Forcing an employee to do manually what an inexpensive tool could do automatically is not only counterproductive; if done broadly in a competitive market where other firms are embracing AI, it could make the firm uncompetitive.
Using AI isn't cheating any more than using a calculator or a word processor was. Ethical guidelines should be updated to ensure that AI is used both ethically and productively.
Mollick’s powerful and convincing presentation explains how we are still underestimating the impact of generative AI and what it can do. Ensuring that employees can use tools in this class safely, effectively, and without unreasonable risk will go a long way to ensuring the competitiveness of any enterprise.
Adjusting employee measurements to focus on quality over quantity, setting policies about approved AI tools, and making sure the ethical guidelines take into account these tools and encourage their use are all critical to the successful adoption of this powerful, and rapidly improving, product set.
As with prior technology innovations, generative AI will redefine the competitive landscape. Whether you survive this change will depend largely on how well you implement generative AI technology.
Rob Enderle is the president and principal analyst at the Enderle Group, where he provides regional and global companies with guidance on how to create a credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero-dollar marketing. You can reach the author via email.