As SEO blends more with AIO, there is a risk of reality being distorted.
AI-generated summaries are now the primary way millions obtain their information, making control over what AI sees a significant security concern. A new study has found that if not monitored, manipulation can seep in unnoticed, leading AI to repeat misleading narratives.
- SEO, or search engine optimization, is about improving a website so that it appears more prominently in search engine results, such as those on Google.
- This process involves enhancing the website's technical setup, ensuring the content is relevant, and increasing its popularity. The primary goal is to rank higher in search results related to the site, which in turn leads to increased visitor traffic.
- AIO stands for AI Optimization. This focuses on making content clear and visible to AI systems, like large language models (LLMs).
- AIO is a strategy to enhance content for AI-powered search engines. It involves tactics similar to SEO, helping content appear in AI-generated search results and chat responses.
- The aim is to create content that AI systems can easily understand and retrieve. It aims to be included in AI training data and to appear in AI-generated answers.
Experiments by SPLX researchers reveal that AI crawlers can be tricked like traditional search engines, but the effects can be more pronounced.
As AI influences online decisions and rankings, seemingly safe sources can become dangerous once processed by AI. The SPLX study emphasizes that people must ensure AI systems interpret data as humans do. It highlights risks such as hidden prompt injections and how AI-driven automation can detect hidden manipulation.
Crawlers can be misled by small header checks, making them open to manipulation. A single rule on a web server can change how AI describes a person, brand, or product without leaving obvious evidence.
The research shows that automation can carry biases. For example, hiring tools or research that utilize AI summaries may inadvertently incorporate faulty data.
Neither ChatGPT nor Perplexity flagged the errors or confirmed sources, highlighting the lack of verification in current AI processes.
SPLX researchers call for improved tracking of content sources, validation of crawls, and ongoing checks of AIO outputs. They also suggest stronger systems to identify and block manipulative sources before they are used.
