AI chat box vs. Google search: What's the difference?

In the third workshop of the Data Coaching program, we talk about data analysis techniques and pitfalls. Of course, since AI is such a hot topic, we also discuss the ethical and environmental implications of using AI as a data analysis tool. But last week, one of the participants asked a great follow-up question that I hadn’t given much thought to:

If a staff person is researching a particular topic – housing, homelessness, education, food insecurity, etc. – is there a difference between using Google versus Claude, ChatGPT, or a similar AI chat box?

On the surface, there doesn’t seem to be much of a difference, especially now that Google automatically gives you an AI summary based on your search terms. However, there are important distinctions nonprofits should be aware of when conducting background research for policy, advocacy, needs assessments, and other reports.

First, search engines and AI work differently. A traditional search engine, like Google, works by trying to match keywords in your search query to different websites, social media posts, videos, and images that it has already collected in an index. Once Google finds and returns the matches, it’s up to you to view the information and find the answers to your questions.

Generative AI, on the other hand, is in the business of trying to understand meaning and create content that answers your questions. AI essentially takes your questions or search queries and generates the answers for you. This might work well if, say, you already have a group of reports and other documents you would like to summarize for easier comprehension. However, if we’re asking AI to do the work for us, there are several things we need to be aware of:

  1. AI-generated responses often don’t cite the sources they draw on to generate an answer, making it challenging to confirm the accuracy of those responses.
  2. AI “hallucinations” are common – ask any attorney. There are multiple documented cases of AI making up legal citations, legal cases, and incorrect legal analyses that have gotten several attorneys in hot water.
  3. People created AI, and all people have biases. As a result, there may be biases baked into AI that influence its answers in ways that aren’t immediately apparent to its users.

This is not to say that there are no issues with traditional search engines. First, Google has also been accused of having biases in its results pages, though the research is limited and mixed. Biased results may have more to do with how Google guesses what you want to hear based on the keywords you use to conduct searches. Also, the existence of Google Ads means anyone can pay to boost their content to the top of the search results. A sponsored result isn’t necessarily the single best match for your search; it just means that someone paid for that piece of content to reach the top of your list.

From my perspective, there are more risks in using AI as a search engine because it typically does not cite its sources, and the processes behind how AI generates content are still opaque to the average person. However, regardless of which technology we choose to use, it is up to us to ensure that the sources we use in our reports – internal and external – are based on reputable information from reputable sources. Examples of reputable sources include peer-reviewed research, grey literature published by “government, business, or academic organizations”, and trade literature aimed at professionals in specific fields.

There’s no reason to risk the reputation of your organization on shortcuts and bad information. Doing our due diligence is the best defense against potential problems brought about by imperfect technology still working out its kinks.

