

In the third workshop of the Data Coaching program, we talk about data analysis techniques and pitfalls. Of course, since AI is such a hot topic, we also discuss the ethical and environmental implications of using AI as a data analysis tool. But last week, one of the participants asked a great follow-up question that I hadn’t given much thought to:
If a staff person is researching a particular topic – housing, homelessness, education, food insecurity, etc. – is there a difference between using Google versus Claude, ChatGPT, or a similar AI chatbot?
On the surface, there doesn’t seem to be much of a difference, especially now that Google automatically gives you an AI summary based on your search terms. However, there are important distinctions nonprofits should be aware of when conducting background research for policy, advocacy, needs assessments, and other reports.
First, search engines and AI work differently. A traditional search engine, like Google, works by trying to match keywords in your search query to different websites, social media posts, videos, and images that it has already collected in an index. Once Google finds and returns the matches, it’s up to you to view the information and find the answers to your questions.
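To make that distinction concrete, here is a toy sketch of the keyword-to-index matching a search engine performs. It is purely illustrative (the page names are made up, and real engines add crawling, ranking, and much more), but it shows the core idea: a search engine returns links, not answers.

```python
# Toy inverted index: maps each keyword to the pages that mention it.
# Real search engines build this index by crawling the web in advance.
index = {
    "housing": ["cityreport.example/housing", "news.example/rent"],
    "homelessness": ["cityreport.example/housing", "hud.example/pit-count"],
    "education": ["ed.example/graduation-rates"],
}

def search(query):
    """Return pages matching any keyword in the query.
    Note the output is a list of pages -- the reader still has to
    open each one and find the answer themselves."""
    matches = []
    for word in query.lower().split():
        for page in index.get(word, []):
            if page not in matches:  # avoid listing a page twice
                matches.append(page)
    return matches

print(search("housing and homelessness"))
```

A generative AI system, by contrast, would skip the list of links entirely and produce a written answer itself, which is exactly where the risks discussed below come in.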
Generative AI, on the other hand, is in the business of trying to understand meaning and creating content to answer your questions. AI essentially takes your questions or search queries and generates the answers for you. This might work well if, say, you already have a group of reports and other documents you would like summarized for easier comprehension. However, if we’re asking AI to do the research for us, there are several risks to be aware of: generative models can “hallucinate” plausible-sounding but false information, they often do not cite their sources, and they can reproduce biases baked into their training data.
This is not to say that there are no issues with traditional search engines. First, Google has also been accused of having biases in its results pages, though the research is limited and mixed. Biased results may have more to do with how Google makes guesses about what you want to hear based on the keywords you use to conduct searches. Also, the existence of Google Ads means anyone can pay to boost their content to the top of the search results. A sponsored result isn’t necessarily the single best match for your search; it just means that someone paid for that piece of content to reach the top of your list.
From my perspective, there are more risks in using AI as a search engine because it typically does not cite its sources, and the processes behind how AI generates content are still opaque to the average person. However, regardless of which technology we choose to use, it is up to us to ensure that the sources we use in our reports – internal and external – are reputable. Examples of reputable sources include peer-reviewed research, grey literature published by “government, business, or academic organizations”, and trade literature aimed at professionals in specific fields.
There’s no reason to risk the reputation of your organization on shortcuts and bad information. Doing our due diligence is the best defense against potential problems brought about by imperfect technology still working out its kinks.


Take our free Data Audit Checklist quiz to evaluate your current data practices and discover immediate improvement areas.