My Experience Building AI Agents With Manus and GenSpark
In 2025, AI agents like GenSpark are transforming how we interact with technology. This article delves into the current state of AI agents, highlighting their capabilities, limitations, and the journey from promise to reality.

Building AI agents for social media monitoring in 2025 reveals significant limitations in current platforms like Manus AI and GenSpark. Despite promises of automation, these agents struggle with API authentication and rate limiting, and frequently hallucinate capabilities they don't possess, making them unsuitable for reliable business tasks.
People keep saying 2025 is the "year of AI agents." Connect an LLM to a few tools (like your email or calendar) and you get an AI agent that can schedule a meeting all by itself, without needing constant re-prompting.
They promise to automate work and make our lives easier.
At Labellerr, where we focus on making data annotation easier, I saw a perfect job for one of these agents.
I spend a lot of time manually checking different websites to find people or companies asking for help with data labeling or annotation.
It's important work, but it takes hours! I thought, "This is exactly what an AI agent should do!"
So, I decided to build one. My goal was simple: "create an agent that could automatically check social media and notify me."
My journey trying to build this seemingly straightforward agent showed me exactly why these powerful AI tools aren't quite ready to take over the world just yet.
What Problem Does AI Agent Social Media Monitoring Solve?
AI agents for social media monitoring automate the time-consuming process of checking LinkedIn, Twitter, and Reddit for relevant business opportunities. Instead of manually searching for hours, these agents can monitor platforms continuously and send instant alerts when relevant content appears, potentially saving 10-15 hours per week of manual work.

Posts Example
Manually checking LinkedIn, Twitter, and Reddit every day is time-consuming. People post requests for data annotation help, ask about tools, or look for services, but finding these posts is like searching for needles in a haystack.
I wanted an AI agent to automate this. Ideally, the agent would:
- Check LinkedIn, Twitter, and Reddit every hour.
- Look for new posts containing keywords like "need data labeling help," "data annotation services," "looking for annotation tools," etc.
- If it found a relevant post, it would immediately send me an alert, maybe through Slack or email.
This seemed like a perfect task for an AI agent – repetitive searching and simple notification.
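To make the plan concrete, here is a minimal sketch of the core of what I wanted: keyword matching plus a Slack alert. The keyword list comes from the plan above; the helper names and the Slack webhook URL are placeholders for illustration, not a real integration.

```python
import json
import urllib.request

# Keywords from the monitoring plan above.
KEYWORDS = [
    "need data labeling help",
    "data annotation services",
    "looking for annotation tools",
]

def is_relevant(post_text: str) -> bool:
    """True if the post mentions any monitored keyword."""
    text = post_text.lower()
    return any(kw in text for kw in KEYWORDS)

def send_slack_alert(post_url: str, snippet: str, webhook_url: str) -> None:
    """Send a simple alert to a Slack incoming webhook."""
    payload = {"text": f"New lead: {snippet}\n{post_url}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

The matching logic is trivial; as the rest of this post shows, everything around it (logins, rate limits, API access) is where the plan fell apart.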
How to Build AI Agents from Scratch: My Failed Attempt
Building AI agents from scratch requires navigating complex API restrictions, authentication challenges, and rate limiting policies that social media platforms actively enforce. LinkedIn's OAuth2 security, Reddit's strict rate limits, and Twitter's $5,000/month API costs make DIY development financially and technically prohibitive for most businesses.
Local Agent Development
My first thought was to build it locally.
So I planned a setup: a simple React frontend to control it and a Python backend to do the searching.
For searching, I'd use libraries like BeautifulSoup (for general web scraping, though tricky on dynamic sites), PRAW (for Reddit), and Tweepy (for Twitter).
I hit roadblocks almost immediately:
- LinkedIn Login Hell: Getting the agent to log into LinkedIn automatically was a nightmare. Their security (using OAuth2) kept blocking my attempts, throwing errors like "403 Forbidden." I couldn't get reliable access.
- Reddit Said 'Stop!': My Reddit searching script worked for about five minutes, then got blocked. Reddit has "rate limits" – rules about how often an automated tool can check for new posts. My agent was checking too frequently.
- Twitter's Price Tag: Twitter's API (the official way for programs to interact with it) had changed. To get the kind of access I needed just to read posts reliably cost a shocking $5,000 per month! That was far too expensive for this project.
Building from scratch was proving difficult, mainly because the platforms themselves make it hard for automated tools to get the data.
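In hindsight, a polite poller should at least back off when a platform signals a rate limit instead of hammering it the way my first script did. A generic sketch, not tied to any one platform's API:

```python
import time
import urllib.error
import urllib.request

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 300.0) -> float:
    """Exponential backoff: 2s, 4s, 8s, ... capped at 5 minutes."""
    return min(base * (2 ** attempt), cap)

def fetch_with_backoff(url: str, max_attempts: int = 5) -> bytes:
    """Fetch a URL, retrying with exponential backoff on HTTP 429 (rate limited)."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:  # only retry on rate limiting
                raise
            time.sleep(backoff_delay(attempt))
    raise RuntimeError(f"Still rate-limited after {max_attempts} attempts")
```

Backoff helps you stay within published limits, but it does not solve the bigger problem: some platforms simply price or gate the access away.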
Attempt 2: Trying the Manus AI Agent Platform
Manus Agent
Manus AI promises no-code social media scraping but delivers broken deployment links, confusing pseudo-YAML configuration, and estimated costs of $300/month for basic monitoring. The platform's tendency to hallucinate deployment details and generate non-functional web links demonstrates the gap between AI agent marketing claims and actual capabilities.
Okay, building from scratch was tough. So, I looked at existing "general purpose" AI agent platforms. Manus AI caught my eye.
It promised a "no-code social media scraper" and claimed it could "auto-deploy" the agent to run online easily (using AWS Lambda). Sounds perfect, right?
The reality was quite different:
- Confusing 'No-Code': The user interface wasn't exactly drag-and-drop. Setting up the agent involved writing instructions that felt a lot like complicated configuration files (pseudo-YAML). It wasn't intuitive.
- Broken Links and Hallucinations: When I tried to deploy the agent, the process seemed to work, but the web links Manus AI generated for the running agent were broken. They didn't lead anywhere. It felt like the AI had just hallucinated the deployment details.
- Unexpected Costs: Even if it had worked, Manus AI estimated the cost for this basic monitoring would be around $300 per month. That still felt high for such a simple task.
Manus AI promised simplicity but delivered complexity, broken results, and significant cost.
You can view the replay of this.
Attempt 3: Giving GenSpark AI a Shot

GenSpark Agent
GenSpark AI creates professional-looking interfaces but fundamentally fails by using placeholder dummy data instead of real-time social media information. This hallucination problem, where AI agents present fabricated data as factual, affects up to 27% of AI-generated content and makes current agent platforms unreliable for business-critical tasks.
I wasn't ready to give up yet. I heard about GenSpark AI, which positions itself as a more advanced "autonomous multi-agent system" with many integrations supposedly built in. Maybe this newer generation of agent could handle it?
Here’s what happened with GenSpark:
GenSpark built a very sleek, professional-looking user interface for my monitoring agent. It looked great!
But when I ran it, the examples it showed me – posts supposedly asking for data annotation help on Reddit and LinkedIn – were completely fake.
It was using placeholder dummy data, not real-time information.
GenSpark looked promising on the surface but failed on the most critical parts: getting real data and handling real logins. It hallucinated key functionalities.
You can watch the chat here.
Why AI Agents Fail in 2025: Key Lessons Learned
AI agents in 2025 face three critical limitations: API access restrictions from social platforms, authentication challenges with OAuth systems, and the tendency to hallucinate capabilities they don't possess. These issues, combined with high costs ranging from $40,000-$200,000+ for development and poor error handling, make current AI agents unsuitable for reliable automation tasks.
Trying these different approaches taught me some valuable lessons about the state of AI agents in 2025:
APIs Are Still the Biggest Hurdle
Social media platforms actively prevent AI agent access through complex OAuth2 authentication, strict rate limits that throttle automated requests, and API pricing that can exceed $5,000/month. These technical barriers exist by design to protect platforms from automated scraping and maintain control over data access, with Twitter's API changes being particularly restrictive.
The main bottleneck isn't always the AI's intelligence; it's getting reliable access to the data sources.
Social media platforms, in particular, actively limit automated access through complex logins (OAuth), strict rate limits, and sometimes very high API costs.
Even advanced agents like GenSpark struggled to overcome these real-world barriers.
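For context, the "happy path" of OAuth2's client-credentials flow is only a few lines of code; the pain is everything around it (app review, scopes, token expiry, per-endpoint permissions). A generic sketch with a placeholder token endpoint and credentials:

```python
import json
import urllib.parse
import urllib.request

def token_request_body(client_id: str, client_secret: str) -> bytes:
    """Form-encode an OAuth2 client-credentials grant request."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("ascii")

def fetch_token(token_url: str, client_id: str, client_secret: str) -> str:
    """Exchange app credentials for a bearer token at the platform's token endpoint."""
    req = urllib.request.Request(
        token_url,
        data=token_request_body(client_id, client_secret),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]
```

Even when this exchange succeeds, the token you get back may not carry the scopes you actually need, which is roughly where my LinkedIn attempt died with 403 errors.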
Agents Overpromise on Integration (Hallucination vs. Execution)
AI agent platforms frequently hallucinate integration capabilities, claiming to connect with services they cannot actually access. This occurs because large language models cannot distinguish between describing a capability and possessing it, leading to fabricated deployment links and non-functional features in up to 27% of interactions.
Many agent platforms claim they can easily connect to various services. However, they often fail when it comes to the tricky details of actual authentication, error handling, and adapting to platform-specific rules.
They frequently hallucinate that they have capabilities they don't possess.
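One practical mitigation is to never trust an agent-reported artifact, like a deployment URL, without checking it. A small sketch of that sanity check (the function names are mine, and it assumes nothing about any particular platform):

```python
import urllib.error
import urllib.request

def looks_alive(status: int) -> bool:
    """Treat any 2xx/3xx HTTP status as evidence the link resolves."""
    return 200 <= status < 400

def verify_deployment_url(url: str, timeout: float = 5.0) -> bool:
    """Check that a URL the agent claims to have deployed actually responds."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return looks_alive(resp.status)
    except (urllib.error.URLError, ValueError):
        return False  # broken, unreachable, or hallucinated link
```

Had Manus AI's deployment step run a check like this on its own output, it would have caught its broken links before reporting success.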
Cost vs. Real Value
Current AI agent platforms charge $300-5,000+ monthly for capabilities that often don't work reliably, making manual processes more cost-effective than automated solutions. The total cost of ownership includes not just platform fees but also debugging time, failed deployments, and potential business risks from hallucinated outputs that can damage brand reputation.
Right now, building and running an AI agent that is truly reliable for tasks involving restricted external platforms can often cost more (in development time, debugging, and potential service fees) than simply doing the task manually or with less "autonomous" tools.
Conclusion
The "year of AI agents" faces reality: current platforms struggle with authentication, hallucinate capabilities, and cannot reliably handle real-world integration challenges. Until AI agents can manage OAuth systems, respect rate limits, and stop fabricating functionality, human oversight remains essential for any business-critical automation task.
We are definitely making exciting progress with AI agents. The idea of AI handling complex, multi-step tasks autonomously is closer than ever.
However, my attempt to build a relatively simple social media monitor showed that we're not quite there yet.
Agents still struggle significantly with real-world integration, especially navigating the complex and often restrictive rules of external platforms and APIs.
They hallucinate functionality and often fail to handle errors robustly.
Until AI agents can reliably manage authentication, respect rate limits, avoid making things up, and navigate the specific quirks of each platform they interact with, human oversight and intervention (human-in-the-loop systems) remain absolutely essential.
The "year of the agent" is exciting, but the fully autonomous future still requires more development.
PS: If you’ve actually built an agent that reliably solves this social media monitoring problem for data annotation leads, please message me – I would genuinely love to beta test it! 🔥
FAQs
Q1: What is GenSpark?
A: GenSpark is an AI-powered platform that utilizes multiple specialized agents to provide comprehensive search results and assist with various tasks, aiming to revolutionize information retrieval.
Q2: How does GenSpark differ from traditional search engines?
A: Unlike traditional search engines that list links, GenSpark generates custom pages called Sparkpages in real time, offering synthesized and personalized information.
Q3: What are the limitations of GenSpark?
A: While GenSpark offers advanced features, it can experience longer response times for deep research tasks and may have limited customization options for users.
Q4: Are AI agents ready for business use in 2025?
A: No, AI agents are not yet reliable for critical business tasks due to API restrictions, authentication challenges, and hallucination issues that affect up to 27% of outputs.
Q5: What's the biggest challenge with AI agent platforms?
A: The biggest challenge is that AI agents often hallucinate capabilities they don't possess, especially regarding real-time data access and platform integration.
Q6: How much does it cost to build a social media monitoring AI agent?
A: Costs range from $40,000-200,000+ for development, plus $300-5,000/month for platform fees, making it more expensive than manual processes.
Q7: Why do social media platforms block AI agents?
A: Platforms use OAuth2 authentication, rate limiting, and high API costs (like Twitter's $5,000/month) to prevent automated scraping and maintain data control.
