For one week this summer, Taylor and her roommate wore GoPro cameras strapped to their foreheads as they painted, sculpted, and did household chores. They were training an AI vision model, carefully syncing their footage so the system could get multiple angles on the same behavior. It was difficult work in many ways, but they were well paid for it — and it allowed Taylor to spend most of her day making art.
“We woke up, did our regular routine, and then strapped the cameras on our head and synced the times together,” she told me. “Then we would make our breakfast and clean the dishes. Then we’d go our separate ways and work on art.”
They were hired to produce five hours of synced footage each day, but Taylor quickly learned she needed to allot seven hours a day for the work, to leave enough time for breaks and physical recovery.
“It would give you headaches,” she said. “You take it off and there’s just a red square on your forehead.”
Taylor, who asked not to give her last name, was working as a data freelancer for Turing Labs, the AI company that connected her with TechCrunch. Turing’s goal wasn’t to teach the AI how to make oil paintings, but to give its model more abstract skills in sequential problem-solving and visual reasoning. Unlike a large language model, Turing’s vision model would be trained entirely on video — and most of it would be collected directly by Turing.
Alongside artists like Taylor, Turing is contracting with chefs, construction workers, and electricians — anyone who works with their hands. Turing Chief AGI Officer Sudarshan Sivaraman told TechCrunch the manual collection is the only way to get a varied enough dataset.
“We are doing it for so many different kinds of blue-collar work, so that we have a diversity of data in the pre-training phase,” Sivaraman told TechCrunch. “After we capture all this information, the models will be able to understand how a certain task is performed.”
Turing’s work on vision models is part of a growing shift in how AI companies deal with data. Where training sets were once scraped freely from the web or collected from low-paid annotators, companies are now paying top dollar for carefully curated data.
With the raw power of AI already established, companies are looking to proprietary training data as a competitive advantage. And instead of farming out the task to contractors, they’re often taking on the work themselves.
The email company Fyxer, which uses AI models to sort emails and draft replies, is one example.
After some early experiments, founder Richard Hollingsworth discovered the best approach was to use an array of small models with tightly focused training data. Unlike Turing, Fyxer is building off someone else’s foundation model — but the underlying insight is the same.
“We realized that the quality of the data, not the quantity, is the thing that really defines the performance,” Hollingsworth told me.
In practical terms, that meant some unconventional personnel choices. In the early days, Fyxer engineers and managers were sometimes outnumbered four-to-one by the executive assistants needed to train the model, Hollingsworth says.
“We used a lot of experienced executive assistants, because we needed to train on the fundamentals of whether an email should be responded to,” he told TechCrunch. “It’s a very people-oriented problem. Finding great people is very hard.”
The pace of data collection never slowed down, but over time Hollingsworth became more selective about the data itself, preferring smaller, more tightly curated datasets when it came time for post-training.
That’s particularly true when synthetic data is used, since it magnifies both the scope of possible training scenarios and the impact of any flaws in the original dataset. On the vision side, Turing estimates that 75 to 80 percent of its data is synthetic, extrapolated from the original GoPro videos. That only makes it more important to keep the original dataset as high quality as possible.
“If the pre-training data itself is not of good quality, then whatever you do with synthetic data is also not going to be of good quality,” Sivaraman says.
Beyond questions of quality, there’s a powerful competitive logic to keeping data collection in-house. For Fyxer, the hard work of data collection is one of the best moats the company has against competitors. As Hollingsworth sees it, anyone can build an open-source model into their product, but not everyone can find the expert annotators needed to train it into something workable.
“We believe that the best way to do it is through data,” he told TechCrunch, “through building custom models, through high-quality, human-led data training.”