Wednesday, May 13, 2026

Notable Researchers Join $4 Billion Effort to Build Self-Improving A.I.


What is an example of a Chinese company seeking to automate the creation of artificial intelligence?


A primary example of a Chinese company seeking to automate the creation of artificial intelligence is 4Paradigm, which focuses heavily on Automated Machine Learning (AutoML) platforms to help enterprises build AI with minimal human intervention.

As of 2026, several Chinese tech giants are advancing this goal:

- Alibaba Cloud: Offers a Machine Learning Platform for AI (PAI) that includes AutoML, designed to automatically optimize algorithms and build custom AI models for enterprise users.

- Baidu: A major player in AutoML, using it to develop AI applications without requiring extensive data science expertise.

- 4Paradigm: Uses full-stack AutoML algorithms to handle tedious, manual tasks, aiming to make AI adoption faster and simpler for sectors like finance and manufacturing.

- Tencent/Alibaba: These companies are heavily integrating "agentic" AI—AI that can autonomously create, test, and fix other AI models and code—into their ecosystems.


These firms aim to democratize AI development, allowing businesses to "act as a commander and tell it what to do" rather than hand-coding every element.


What are open-source alternatives to 4Paradigm, the Chinese company seeking to automate the creation of artificial intelligence?


Based on the capabilities of 4Paradigm—which focuses on enterprise AutoML, decision-making AI, and automated machine learning (Sage HyperCycle)—the top open-source alternatives in 2026 for automated AI development include H2O.ai, PyCaret, and Auto-Sklearn. These tools provide capabilities for automated feature engineering, model selection, and hyperparameter tuning, which align with 4Paradigm's focus on simplifying AI adoption for enterprises.
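Automated model selection, one of the capabilities just mentioned, can be shown in miniature with plain Python: fit every candidate model, score each on held-out data, and keep the winner. This is an illustrative toy (the data and candidate models are invented for the example), not any vendor's actual pipeline:

```python
import random

# Synthetic data: y = 2x + 1 plus a little noise (seeded for reproducibility)
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.1) for x in xs]

# Train/validation split
x_tr, y_tr = xs[:70], ys[:70]
x_va, y_va = xs[70:], ys[70:]

def fit_constant(x, y):
    """Baseline model: always predict the training mean."""
    mean = sum(y) / len(y)
    return lambda x_new: mean

def fit_linear(x, y):
    """Ordinary least squares for a single feature."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return lambda x_new: slope * x_new + intercept

def mse(model, x, y):
    """Mean squared error of a fitted model on a dataset."""
    return sum((model(xi) - yi) ** 2 for xi, yi in zip(x, y)) / len(x)

# AutoML in miniature: fit every candidate, keep the best on held-out data
candidates = {"constant": fit_constant, "linear": fit_linear}
scores = {name: mse(fit(x_tr, y_tr), x_va, y_va) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # → linear
```

Real AutoML platforms do the same loop over far larger model families and add feature engineering and tuning on top, but the select-by-validation-score core is unchanged.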


Top Open-Source AutoML Alternatives (2026)

- H2O.ai (Open Source): Highly effective for automated machine learning, particularly with structured/tabular datasets, offering strong model interpretability.

- PyCaret: An open-source, low-code machine learning library in Python that automates machine learning workflows, making it a strong, lightweight alternative for rapid prototyping.

- Auto-Sklearn: Built on top of Scikit-learn, this framework automates algorithm selection and hyperparameter tuning.

- TPOT: An automated pipeline tool that optimizes machine learning pipelines using genetic programming.

- MLJAR: An automated machine learning framework that provides clean, understandable code for data cleaning, feature engineering, and model training.

A note on hyperparameter tuning, since it is central to all of these tools: hyperparameter tuning is the process of optimizing a model's configuration settings—defined before training—to maximize performance on unseen data. Parameters (e.g., weights) are learned from the data during training, while hyperparameters (learning rate, batch size, optimizer type, depth of trees, number of hidden layers, neurons per layer) are set by the engineer beforehand. Tuning tests combinations of these settings to prevent underfitting and overfitting; common techniques include manual tuning, Grid Search, Random Search, and Bayesian Optimization. (An optimizer, one such hyperparameter choice, is a mathematical algorithm that adjusts the model's parameters, such as weights and biases, to minimize the loss function, enabling the model to reach its most accurate form.)
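Grid Search, the simplest of the tuning techniques just mentioned, can be demonstrated in a few lines of plain Python. This sketch tunes a single hyperparameter—the learning rate of gradient descent—on invented toy data; it illustrates the concept and is not any framework's actual API:

```python
# Grid search over the learning rate of plain gradient descent
# fitting y = w * x on toy data (true w = 3).

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

def train(lr, steps=100):
    """Fit the parameter w by gradient descent; lr is the hyperparameter."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * xi - yi) * xi for xi, yi in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def loss(w):
    """Mean squared error of the fitted model."""
    return sum((w * xi - yi) ** 2 for xi, yi in zip(xs, ys)) / len(xs)

# The grid of hyperparameter values to try: too large diverges,
# too small underfits within the step budget.
grid = [0.3, 0.05, 0.01, 0.001]
results = {lr: loss(train(lr)) for lr in grid}
best_lr = min(results, key=results.get)
print(best_lr)  # → 0.05
```

Random Search samples the grid values at random instead of exhaustively, and Bayesian Optimization uses earlier results to decide which value to try next; both replace only the `grid` loop here.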


Open Source AI Application Builders (2026)

For building end-to-end AI applications similar to 4Paradigm's platform-centric solutions:

- Dify: An open-source, production-ready AI app builder.

- Langflow: A user-friendly tool for creating visual AI workflows.

- n8n: A powerful, self-hostable workflow automation tool for technical teams.
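The pattern all three tools automate is the same: a chain of steps in which each step consumes the previous step's output. A minimal sketch in plain Python (the step functions are hypothetical stand-ins, not the real interfaces of Dify, Langflow, or n8n):

```python
# A minimal workflow engine: run named steps in order, feeding each
# step the previous step's output. The steps are invented examples.

def fetch(_):
    return "raw customer feedback"

def summarize(text):
    return f"summary of: {text}"

def route(summary):
    return {"destination": "support-queue", "payload": summary}

workflow = [("fetch", fetch), ("summarize", summarize), ("route", route)]

def run(workflow, data=None):
    """Execute each step, threading the output of one into the next."""
    for name, step in workflow:
        data = step(data)
        print(f"{name} -> {data!r}")
    return data

result = run(workflow)
```

The visual builders above let users assemble such chains by dragging nodes instead of writing the `workflow` list by hand, with LLM calls as individual steps.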


Key Open-Source Foundation Models (2026)

For organizations looking to deploy open-weight models similar to 4Paradigm's proprietary SageGPT:

- DeepSeek-R1 / V3: Leading Chinese open-weight models recognized for high performance in reasoning.

- Qwen3 / Qwen3-Coder: Advanced open-weight models from Alibaba suited to agentic workflows.

- GLM-4: A highly capable open-source model developed by Zhipu AI.


Comparative Overview

While 4Paradigm is a top proprietary, enterprise-level AI vendor in China, these open-source alternatives allow organizations to maintain data sovereignty and reduce licensing costs.


In the United States, this effort is not yet mature:


“Recursive Superintelligence, founded by former Google, Meta and OpenAI researchers, is part of a growing effort to automate the creation of artificial intelligence.


Companies like Anthropic and OpenAI released new A.I. systems late last year that were particularly good at writing computer code.


In recent months, the technology has rapidly remade the way that Silicon Valley’s engineers build, test and modify new software applications. If an artificial intelligence system can write code, it can help accelerate the development of things as varied as word processors and social media apps.


Now, many of the world’s leading researchers believe that A.I. will soon be powerful enough to improve itself with little or no help from human developers.


“A.I. is code. And now, A.I. can code,” a veteran researcher, Richard Socher, said. “The ingredients are there.”


Dr. Socher recently founded, with seven other researchers, a company to pursue this mind-bending goal, which is often called “recursive self-improvement.”


His start-up, Recursive Superintelligence, has raised more than $650 million from venture capital firms including Google Ventures and Greycroft and the chip-making giants Nvidia and AMD. The six-month-old company, which has offices in San Francisco and London, has fewer than 30 employees. But it is now valued at more than $4 billion.


The company should not be confused with Ricursive Intelligence, which is pursuing a similar goal and is also valued at $4 billion. The prominent A.I. start-ups Anthropic and OpenAI are also chasing recursive self-improvement, which has been an obsession among Silicon Valley technologists for decades.


Dr. Socher was previously head of A.I. research at the business software maker Salesforce and chief executive of the A.I. start-up You.com. His seven co-founders include notable researchers from many of the industry’s leading A.I. companies, including Josh Tobin, Jeff Clune and Tim Shi, all from OpenAI, and Yuandong Tian from Meta.


Many of these researchers specialize in a kind of A.I. development called “open-endedness.” This involves building software systems that can run for days, months or even years in pursuit of goals set by the researchers.


Recursive Superintelligence has also hired Peter Norvig, who spent 25 years as director of research at Google and co-wrote an A.I. textbook (“Artificial Intelligence: A Modern Approach”) that has been a standard inside universities for three decades.


Recursion, a term that is common among mathematicians and computer programmers, refers to a mathematical function that feeds itself. After a recursive procedure generates information, it uses that information to generate something else — and so on.
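In code, that definition looks like the following toy function, which reuses its own output at each step (an illustrative sketch added for this post, not taken from the article):

```python
# Recursion in the textbook sense: a function that calls itself,
# feeding each intermediate result back in to produce the next one.

def countdown_sum(n):
    """Sum 1..n by reducing the problem to a smaller copy of itself."""
    if n == 0:                       # base case: stop the self-reference
        return 0
    return n + countdown_sum(n - 1)  # reuse the function's own output

print(countdown_sum(5))  # → 15
```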


Though many researchers are bullish on the idea of A.I.’s recursively improving itself, others point out that the current technology is a long way from the point where humans can be removed from the loop. Humans — like Dr. Socher — must still generate the new ideas that drive A.I. development forward.


The aim, however, is to push more and more work onto machines, including the generation of new ideas.


OpenAI has said it is now building an “automated A.I. researcher.” By the fall, the company hopes to have a system that can do the work of a “less experienced” researcher, said Sam Altman, OpenAI’s chief executive. Similar efforts are underway at other leading companies.


Dr. Socher said his start-up would need years to build the kind of technology that he and his co-founders envisioned. The company hopes to eventually apply the technology to other fields, such as drug discovery and other kinds of biological research.


(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.)” [1]


1. Metz, Cade. "Notable Researchers Join $4 Billion Effort to Build Self-Improving A.I." New York Times (Online), New York Times Company, May 13, 2026.
