Tuesday, 25 March 2025

Confluent expands Confluent Cloud for Apache Flink to boost real-time AI development

Confluent upgrades Confluent Cloud for Apache Flink with new AI tools, simplifying real-time app development and improving data processing.

Confluent has unveiled a set of new features for its Confluent Cloud for Apache Flink, designed to simplify the development of real-time AI applications. The enhancements include Flink Native Inference, Flink Search, and built-in machine learning (ML) functions, offering developers a more streamlined experience in deploying and managing AI models within a single platform.

The announcement, made on 19 March in Singapore, highlights Confluent's continued efforts to bring powerful AI capabilities to organisations without requiring deep technical expertise or fragmented tools.

Shaun Clowes, Chief Product Officer at Confluent, said, "Building real-time AI applications has been too complex for too long, requiring a maze of tools and deep expertise just to get started. With the latest advancements in Confluent Cloud for Apache Flink, we're breaking down those barriers, bringing AI-powered streaming intelligence within reach of any team. What once required a patchwork of technologies can now be done seamlessly within our platform, with enterprise-level security and cost efficiencies baked in."

Addressing the complexity of real-time AI

According to research from McKinsey, 92% of businesses plan to increase their AI investments in the next three years. However, the journey to building AI applications remains challenging. Many developers currently navigate an ecosystem filled with multiple tools, programming languages, and interfaces to deploy ML models and extract context from various data sources. This complexity often leads to inefficiencies, operational delays, and unreliable AI outputs, sometimes referred to as "AI hallucinations."

To address this, Confluent's updated Flink features are focused on three core areas: inference, search, and accessibility.

Flink Native Inference allows development teams to run open source AI models directly within Confluent Cloud. This reduces the need for managing additional infrastructure, making it easier to maintain security and efficiency while running ML models. Since inference takes place within the platform, it adds another layer of data protection and lowers costs.
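The practical difference is where the model call happens. As an illustrative sketch only (this is not Confluent's API; the function and event fields below are hypothetical), "native" inference means the model is applied to each record inside the stream processor itself, rather than sending records to a separate inference service:

```python
# Illustrative sketch, not Confluent's API: with native inference the
# model runs inside the stream pipeline, so records never leave the
# platform for an external inference endpoint.
from typing import Callable, Iterable, Iterator


def native_inference(stream: Iterable[dict],
                     model: Callable[[str], float]) -> Iterator[dict]:
    """Apply a model to each event inline, within the pipeline."""
    for event in stream:
        event["score"] = model(event["text"])  # no network hop to a model server
        yield event


# A stand-in "model" for the sketch: score by text length. A real
# deployment would load an open source model checkpoint instead.
def toy_model(text: str) -> float:
    return min(len(text) / 100.0, 1.0)


events = [{"text": "payment failed"}, {"text": "ok"}]
scored = list(native_inference(events, toy_model))
```

Keeping the call in-process is what yields the security and cost benefits the announcement describes: data stays within the platform boundary, and there is no separate inference cluster to operate.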

Flink Search, another new addition, gives developers a single interface to access data across multiple vector databases like MongoDB, Elasticsearch, and Pinecone. These vector searches are essential in helping large language models (LLMs) provide accurate and relevant responses. By simplifying the process of retrieving context-rich data, Flink Search eliminates the need for complex ETL (Extract, Transform, Load) pipelines and manual data handling.
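At its core, a vector search ranks stored embeddings by similarity to a query embedding. The sketch below shows that idea in plain Python with an in-memory list standing in for the vector store; Confluent's feature fronts external stores such as MongoDB, Elasticsearch, and Pinecone behind one interface, and the document IDs and vectors here are invented for illustration:

```python
# Minimal sketch of the vector search that a unified interface fronts.
# The "index" is an in-memory list; a real deployment queries a vector
# database such as MongoDB, Elasticsearch, or Pinecone.
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def search(query_vec: list[float], index: list[dict], top_k: int = 1) -> list[dict]:
    """Return the top_k documents most similar to the query embedding."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, doc["vec"]), reverse=True)
    return ranked[:top_k]


index = [
    {"id": "refund-policy", "vec": [0.9, 0.1]},
    {"id": "shipping-faq",  "vec": [0.1, 0.9]},
]
hits = search([0.8, 0.2], index)  # query embedding close to "refund-policy"
```

In a retrieval-augmented generation (RAG) setup, the documents returned this way are passed to the LLM as context, which is how vector search helps ground its responses.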

Making data science more accessible

The built-in ML functions in Flink SQL bring common AI use cases such as forecasting, anomaly detection, and real-time data visualisation into reach for users without advanced data science backgrounds. By embedding these features directly into SQL workflows, Confluent enables teams across industries to generate insights and improve decision-making faster.
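To make the "accessibility" point concrete, here is a hedged sketch of the kind of logic a built-in anomaly-detection function hides behind a single call. This is plain Python with a z-score rule, not Confluent's implementation, and the threshold and sample readings are invented for illustration:

```python
# Sketch of the logic a built-in anomaly-detection function might hide
# behind one SQL call: flag values far from the mean in standard
# deviations (a simple z-score rule). Not Confluent's implementation.
import statistics


def detect_anomalies(values: list[float], threshold: float = 3.0) -> list[float]:
    """Return values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant series: nothing can be anomalous
    return [v for v in values if abs(v - mean) / stdev > threshold]


readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0]  # one obvious outlier
anomalies = detect_anomalies(readings, threshold=2.0)
```

Exposing this as a SQL function means an analyst writes one function call over a stream instead of implementing and tuning the statistics themselves.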

Commenting on the platform's impact, Steffen Hoellinger, Co-founder and CEO at Airy, said, "Confluent helps us accelerate copilot adoption for our customers, giving teams access to valuable real-time, organisational knowledge. Confluent's data streaming platform with Flink AI Model Inference simplified our tech stack by enabling us to work directly with large language models (LLMs) and vector databases for retrieval-augmented generation (RAG) and schema intelligence, providing real-time context for smarter AI agents. As a result, our customers have achieved greater productivity and improved workflows across their enterprise operations."

Stewart Bond, Vice President of Data Intelligence and Integration Software at IDC, added, "The ability to integrate real-time, contextualised, and trustworthy data into AI and ML models will give companies a competitive edge with AI. Organisations need to unify data processing and AI workflows for accurate predictions and LLM responses. Flink provides a single interface to orchestrate inference and vector search for RAG, and having it available in a cloud-native and fully managed implementation will make real-time analytics and AI more accessible and applicable to the future of generative AI and agentic AI."

More features available in early access

Confluent describes Confluent Cloud for Apache Flink as the only serverless stream processing solution that combines real-time and batch processing in a single platform. With the new AI, ML, and analytics capabilities, businesses can reduce operational overhead and simplify development processes further. These features are now available in an early access programme for current Confluent Cloud users.

Other enhancements in Confluent Cloud include Tableflow, Freight Clusters, Confluent for Visual Studio Code, and the Oracle XStream CDC Source Connector, providing teams with even more tools to manage and process real-time data.
