Dispelling AI Job Fears
Posted by Jen García on
Ed. note: Throughout the past two years, the Goldhirsh Foundation and its LA2050 initiative have provided complimentary AI (artificial intelligence) consulting, training, and workshops to the Los Angeles impact community and beyond. The work is led by Jen García, Goldhirsh Foundation’s AI EIR (executive-in-residence). Below – and in other posts available in the AI section of our news feed – García shares practical insights, real-world use cases, and emerging research to support nonprofits’ responsible and ethical adoption of AI tools.
You've heard the threat: "AI is coming for your job."
Those of us in the social impact sector may not feel the same fear that some in the corporate world do when they hear this, because we understand the human involvement needed to address the causes we serve.
But for anyone who is worried, allow me to reassure you of a few things.
1) AI will take tasks, not jobs. Andrew Ng, co-founder of Google Brain and founder of DeepLearning.AI, reminds us that jobs are a series of tasks. AI can do some of these tasks, such as data analytics, better than we can, but not at the fear-mongering rate we've been sold. Many of us, in our trial-and-error adoption of AI, have discovered as much. But we've also discovered that automating research with the help of a large language model saves hours of rabbit-hole digging for grants, best practices, or other information. And, when used as a thought partner, AI tools open up opportunities for informed decision-making and higher-level thinking.
2) AI is not as advanced as we may think. Ivan Zhao, founder of the workspace Notion, wrote an article on Notion’s blog last month called “Steam, Steel, and Infinite Minds,” in which he ponders historical metaphors for AI implementation. He describes two major issues with AI (particularly agents: software that uses a large language model with access to tools, memory, and data) that you may have already experienced as bottlenecks: 1) the information you want to connect to AI tools lives in multiple applications, local desktop files, and your brain, most of which AI cannot access, and 2) a human in the loop is needed to verify the quality and veracity of the LLM’s output.
3) “Human in the loop” continues to be critical to AI.
As we test which tasks AI is most useful for, perhaps we should ask: Do I want AI tools to take this task from me? If so, why? If the answer is yes to every task in your job, then, well, maybe AI could take your job. Remember your agency and your importance as the wise expert with actual experience, a human in each of the tasks you complete, even if that means simply verifying the work has been done correctly.
What remains true in the sector is 1) the only constant is change, and 2) to remain a valuable contributor at work, we must continually learn. This has been the case from the beginning of time, and it will remain so no matter how AI and other technologies evolve. And as AI adoption increases, employers will expect familiarity with the effective uses of AI. Today, you are at the early stage of AI adoption: if you learn where AI amplifies human work and teach others, your value will increase.
All this means that we are still needed, now more than ever, and we have time to learn and continue to grow. As we experiment with these AI tools, we should focus on learning when to use them, and how to continually augment our own cognitive abilities.