A Critical Comparison of Grok, Gemini, OpenAI, Anthropic, and Perplexity

Posted by Alvar Laigna | October 30, 2025 at 10:00 AM

In the rapidly evolving world of artificial intelligence, choosing the right provider can make or break your personal, business, or institutional projects. As someone who has had firsthand negative experiences with certain players and heard similar stories from friends and colleagues, I have conducted a deep investigation into the key aspects of customer support, privacy, data use, best practices, and overall business behaviors for five major AI providers: Grok (from xAI), Gemini (Google), OpenAI, Anthropic, and Perplexity.

My goal here is straightforward: praise where it is due, but shine a harsh light on problematic practices. While Grok and xAI stand out for their commitment to openness and Google’s Gemini offers a robust ecosystem, the landscape reveals a troubling gap between marketing promises and reality. Most alarmingly, OpenAI’s track record is riddled with security breaches, toxic leadership, deceptive business practices, and a fundamental disregard for user privacy that makes it unreliable and untrustworthy for any serious use. Let me break it down.

Customer Support: A Chasm Between Promises and Reality

Customer support is the frontline of any AI service. When things go wrong, you need quick, human assistance, not endless loops of automated responses that fail to resolve critical issues. My research uncovered a significant gap between user expectations and the support actually provided by several major players.

| Provider | Support Quality | Key Findings |
| --- | --- | --- |
| Grok (xAI) | Amazing | Praised for being “honest and straightforward.” Enterprise API offers dedicated engineering teams. Generally positive sentiment with responsive support. |
| Gemini (Google) | Good | Leverages Google’s extensive 24/7 support infrastructure. Generally reliable, transparent, and well-resourced. |
| OpenAI | Abysmal | Over 274 BBB complaints in three years. Nonexistent human support. Users report ignored billing issues, locked accounts without explanation, and a complete lack of recourse. Not BBB accredited. |
| Anthropic | Mixed | Marketed as user-centric, but many users describe support as problematic. |
| Perplexity | Poor | Non-responsive support that users call “unhelpful.” |

OpenAI represents a case study in how not to treat customers. The Better Business Bureau has been flooded with complaints, with one user calling it an “unacceptable business practice.” My own experiences, and those of my friends, mirror this sentiment: accounts locked without explanation, queries ignored for over a month, and a complete absence of human support. For any serious business application, this level of unreliability is a deal-breaker.

A critical note on Anthropic: My initial research praised Anthropic’s support based on their marketing. However, real-world user experiences tell a very different story. Multiple sources, including Trustpilot and Reddit, reveal widespread complaints about bad customer service. Since my own experiences with them have been positive, it is hard for me to go further here.

Privacy and Data Use: Who Can You Trust With Your Information?

Privacy is not just a buzzword. It is about how your data is collected, handled, stored, shared, and potentially exploited. The differences between providers are stark and consequential.

Grok (xAI): Transparent and User-Focused

xAI’s privacy policy is refreshingly straightforward. The company commits to not selling user data or using it for advertising. Conversations are deleted by default within 30 days, and users have strong rights to access or erase their data. This transparency aligns with xAI’s stated mission to build “maximally beneficial” AI. However, a reported privacy incident in August 2025 serves as a reminder that no system is perfect, and vigilance remains essential.

Gemini (Google): Comprehensive Controls

Google’s privacy framework is comprehensive and well-documented. The company provides granular user controls for activity data and commits not to sell personal information. While data is used for service improvements, opt-outs are available. Some concerns exist around human reviewers accessing anonymized data, which can be retained for up to three years, but overall the approach is transparent and user-respecting.

Anthropic: A Troubling Regression

In what has been described as a “massive privacy regression,” Anthropic recently updated its privacy policy to use all user conversations for AI training by default unless users explicitly opt out. This represents a significant departure from their previous privacy-first positioning. The company retains inputs and outputs for up to two years, and trust and safety classification scores for up to seven years if flagged. This shift raises serious questions about their commitment to the privacy principles they once championed.

Perplexity: Mixed Signals

Perplexity offers a Zero Data Retention Policy for its Sonar API, which is a strong commitment for developers. However, some users have raised concerns about potential keystroke tracking in consumer-facing products. The company uses aggregated, anonymized data for improvements and complies with CCPA requirements, but the mixed signals warrant caution.
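
For developers weighing that Zero Data Retention commitment, the integration itself is just a standard chat-completions call. Here is a minimal sketch in Python; the endpoint and model name follow Perplexity’s public Sonar documentation as I understand it, so verify both against the current docs before relying on them:

```python
# Minimal Sonar API call (assumes Perplexity's OpenAI-compatible
# chat-completions endpoint; endpoint and model names may change).
import os

import requests

API_URL = "https://api.perplexity.ai/chat/completions"
API_KEY = os.environ["PERPLEXITY_API_KEY"]  # your own key, never hard-coded

payload = {
    "model": "sonar",  # assumed current model name; check the docs
    "messages": [
        {"role": "user", "content": "Summarize recent AI privacy policy changes."},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Keep in mind that the zero-retention commitment covers the Sonar API tier; the consumer-facing products discussed above carry their own policies.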

OpenAI: Alarmingly Careless and Dangerous

OpenAI’s approach to privacy and data security can only be described as reckless. The litany of issues is extensive and deeply troubling.

Massive Data Scraping Without Consent: A class-action lawsuit alleges that OpenAI “secretly scraped 300 billion words from the internet, including personal information obtained without consent.” This indiscriminate collection of data from books, articles, websites, and other sources forms the foundation of its models and was conducted without informed consent or notification to millions of individuals whose data was harvested.

Broad Use of User Prompts for Training: OpenAI’s privacy policy allows the company to use user prompts and interactions for ongoing model training. While an opt-out mechanism exists, critics describe it as “incomplete.” The policy explicitly states that OpenAI “may provide [users’] Personal information to third parties without further notice,” a clause that should alarm any privacy-conscious user.

Over 1,000 Documented Security Breaches: According to data from Cybernews, OpenAI has been breached more than 1,000 times, with a report documenting 1,140 specific instances. This is not theoretical risk—it reflects a documented pattern of major security failures:

Employee Data Leakage: A recent report found that 77% of employees leak sensitive company data through ChatGPT, highlighting how easily confidential corporate information is funneled into OpenAI’s systems. This represents a catastrophic failure of data confidentiality (a basic mitigation sketch appears at the end of this section).

Ongoing FTC Investigation: The Federal Trade Commission is investigating OpenAI for “unfair or deceptive privacy or data security practices.” The Electronic Privacy Information Center (EPIC) filed a formal complaint alleging that OpenAI engages in unfair and deceptive trade practices.

Disregard for Privacy Rights: OpenAI’s compliance with privacy regulations like GDPR appears deficient. The company severely restricts the ability of non-EU users to request data deletion and generally does not permit users to delete personal data included in core training datasets.

This is not merely sloppy—it represents a fundamental disrespect for user data and a pattern of behavior that treats personal information as a resource to be exploited rather than a responsibility to be protected.
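
None of this leaves organizations entirely helpless. Short of a full data-loss-prevention deployment, even a crude client-side filter can blunt the employee-leakage problem described above by scrubbing obvious secrets before a prompt ever leaves the corporate network. The sketch below is purely illustrative; the three patterns are my own assumptions, not production-grade detection:

```python
# Illustrative pre-send redaction filter. The patterns below are
# examples only; a real deployment would use a dedicated DLP tool.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),  # hypothetical key format
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Ask the bot about jane.doe@corp.com, key sk-abc123def456ghi789"))
```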

Best Practices and Business Behaviors

Beyond technology and privacy policies, the business practices and leadership behaviors of these companies reveal their true character and values.

Grok (xAI): Commitment to Openness

xAI is exemplary in its commitment to building transparent, beneficial AI. The company offers enterprise and business solutions with strong controls and real-time relevance through integration with X. Their approach prioritizes user trust and open communication.

Gemini (Google): Solid and Compliant

Google maintains strong compliance standards and leverages its vast resources for innovation in education and business contexts. The company provides robust tools and maintains transparency in its operations.

Anthropic: Technical Excellence, Operational Concerns

While Anthropic’s technology is powerful and their initial focus on AI safety was laudable, recent privacy policy changes and poor customer support raise questions about their operational commitment to user-centric principles.

Perplexity: Research-Focused with Limitations

Perplexity’s focus on citation transparency and tools for deep research represents a valuable contribution to the AI landscape. However, customer support issues limit their overall reliability. As a sidenote, I love their latest Comet browser. You can try it too using my invite link https://pplx.ai/laigna.

OpenAI: A Pattern of Deception and Disrespect

OpenAI’s business practices reveal a troubling pattern that extends far beyond technical issues:

Abandonment of Founding Mission: OpenAI was founded as a nonprofit with the explicit mission to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” That noble mission has been systematically abandoned. The organization transformed into a “capped-profit” entity, shifted from open-source to closed-source models, and entered into a $13 billion partnership with Microsoft. As Vice put it, OpenAI “is now everything it promised not to be: corporate, closed-source, and for-profit.”

Toxic Leadership and Psychological Abuse: The November 2023 ouster of CEO Sam Altman was not merely about strategic disagreements. Senior employees approached the board alleging that Altman had been “psychologically abusive.” Former OpenAI executives Dario and Daniela Amodei, who left to found Anthropic, described Altman’s tactics as “gaslighting” and “psychological abuse.” Former board member Helen Toner confirmed that two executives reported “psychological abuse” from Altman directly to the board with supporting documentation. She further alleged that Altman had lied to the board, withholding information about ChatGPT’s release and his ownership of OpenAI’s startup fund. These serious concerns about toxic work environment, manipulation, and lack of candor directly contributed to his firing.

Secretive and Authoritarian Culture: Karen Hao, the first journalist ever embedded inside OpenAI, found the organization “weirdly secretive” even during its nonprofit phase. When she asked basic questions about the company’s premise, leadership “really struggled to answer.” She described the company as driven by a “quasi-religious ideology” and operating in a “techno-authoritarian way.”

Safety Takes a Back Seat: Jan Leike, former co-lead of OpenAI’s superalignment team, resigned in protest, stating that safety culture and processes “take a back seat to shiny products.” This prioritization of product launches over safety considerations is a damning indictment from someone who led safety efforts.

Inherent Bias and Confabulations: OpenAI’s models are known to produce “confabulations” (also called hallucinations)—credible-sounding but entirely false information. In one case cited by EPIC, ChatGPT invented “a half dozen fake court cases” when asked to assist a lawyer. Because models are trained on uncurated internet data, they “systematically produce outputs biased against historically disadvantaged groups,” with serious ramifications in banking, hiring, and real estate.

Facilitating Third-Party Harm: OpenAI disseminates its flawed and bias-prone technology to millions of third-party developers via APIs without sufficient guardrails or verification of security measures. This provides the “means and instrumentalities” for others to cause harm at scale.

Trade Secret Theft Allegations: xAI has accused OpenAI of stealing trade secrets by hiring away former xAI employees to gain access to confidential information about the Grok chatbot.

This pattern of behavior—from abandoning founding principles to toxic leadership to reckless data handling—reveals a company that cannot be trusted with sensitive information or critical business operations.

My Ranking: From Best to Avoid

Based on this comprehensive investigation, here is my ranking:

  1. Grok (xAI) – Superior openness, strong enterprise support, real-time awareness, and transparent privacy practices.
  2. Gemini (Google) – Reliable support, robust ecosystem, comprehensive privacy controls, and strong compliance.
  3. Perplexity – Valuable for research with citation features, though weak customer support holds it back.
  4. Anthropic – Powerful technology for research and coding, undermined by its recent privacy regression and support complaints.
  5. OpenAI – Unreliable, untrustworthy, and a liability for any serious use case.

A Critical Warning for Government, Education, and Business

Government bodies and educational institutions must be particularly cautious. The sensitive nature of their data makes them prime targets for the kind of careless data handling and security lapses that have plagued OpenAI. I strongly urge these organizations to steer clear of OpenAI entirely and instead explore the offerings from Grok, Gemini, and Perplexity, which provide dedicated enterprise and educational solutions with greater emphasis on security, transparency, and control.

For businesses, the message is equally clear: never rely on OpenAI. The combination of nonexistent customer support, over 1,000 documented security breaches, toxic leadership, deceptive business practices, and fundamental disregard for user privacy creates unacceptable risks. Experts warn against using OpenAI for business applications due to compliance gaps and ethical risks. Choose providers that demonstrate genuine commitment to user trust and data protection.

Conclusion: Reconsidering the Cost of Progress

A profound disconnect exists between OpenAI’s public image as a benevolent innovator and the documented reality of its security failures, toxic leadership, and unethical data practices. The company that promised to build safe AI for the benefit of all humanity has repeatedly prioritized “shiny products” over the safety, privacy, and trust of its users.

As AI becomes more deeply embedded in our lives, we must hold these powerful new entities accountable and ensure the technologies shaping our future are built on a foundation of genuine transparency and trust, not just hype and broken promises.

AI should empower, not endanger. Choose wisely—your data, your business, and your sanity depend on it.


This article also appeared on my Medium. You can also read all of my articles here on my website.

References

[1] OpenAI, LLC | BBB Complaints | Better Business Bureau. Retrieved October 30, 2025.

[2] 77% of Employees Leak Data via ChatGPT, Report Finds. (2025, October 9). eSecurityPlanet.

[3] xAI Privacy Policy. (2025, July 10). xAI.

[4] Updates to Consumer Terms and Privacy Policy. (2025, August 28). Anthropic.

[5] Gemini Apps Privacy Hub. (2025, October 22). Google.

[6] Privacy & Security - Zero Data Retention Policy. Perplexity.

[7] Cybernews. OpenAI Breach Data. Referenced in security analysis reports, 2024-2025.

[8] EPIC Complaint to FTC In re Open AI. (2024, October 29). Electronic Privacy Information Center.