Tag Archives: RAG
We had the pleasure on Friday to take individual calls with everybody who submitted to the ideation phase of the MariaDB AI RAG hackathon.
The ideation phase deadline passed last week, and we are happy to share that we received several promising submissions for both tracks. The innovation track covers applications built on MariaDB Vector, such as RAG apps; the integration track covers enabling MariaDB Vector in an existing framework.
Participants range from individual contributors to a corporate team. Some already have experience with AI, while others are newcomers to RAG.
…
The ideation phase of the MariaDB AI RAG Hackathon is nearing its deadline on Monday (by end of March).
We have several cool submissions so far. One is about combining Knowledge Graphs and LLMs, using MariaDB Vector nearest-neighbour search. Another is an “advanced context diff” that identifies the differences between two text corpora based not on their literal wording, but on their content.
All of the current submissions are in the Innovation track. We would particularly welcome submissions in the Integration track – adding MariaDB Vector support to existing frameworks or other apps.
…
One week left to join the AI RAG Hackathon with MariaDB Vector and Python!
Winners get to demo at the Helsinki Python meetup in May, receive merit and publicity from MariaDB Foundation and Open Ocean Capital, and prizes from Finnish verkkokauppa.com.
To participate, gather a team (1-5 people) and submit an idea by the end of March for one of the two tracks. You then have until 5th May to develop the idea before the meetup on 27th May.
- Integration track: Enable MariaDB Vector in an existing open source project or AI framework.
…
The day has come that you have been waiting for since the ChatGPT hype began: You can now build creative AI apps using your own data in MariaDB Server! By creating embeddings of your own data and storing them in your own MariaDB Server, you can develop RAG solutions where LLMs can efficiently execute prompts based on your own specific data as context.
Why RAG?
Retrieval-Augmented Generation (RAG) produces more accurate, fact-based GenAI answers grounded in data of your own choice, such as your own manuals, articles or other text corpora. Compared with a general Large Language Model (LLM) alone, RAG answers stay closer to the facts, without having to train or fine-tune a model.
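The retrieval half of RAG can be sketched in a few lines. The SQL below uses MariaDB Vector's `VECTOR` column type, `VEC_FromText()` and `VEC_DISTANCE_EUCLIDEAN()`; the table name, column names and the tiny 3-dimensional toy embeddings are illustrative only (a real app would get high-dimensional vectors from an embedding model), and the pure-Python distance function simply mirrors what MariaDB computes server-side:

```python
import math

# Schema sketch: store each text chunk next to its embedding
# (names and dimension are illustrative, not from the post).
SCHEMA_SQL = """
CREATE TABLE docs (
  id INT PRIMARY KEY,
  content TEXT,
  embedding VECTOR(3) NOT NULL,
  VECTOR INDEX (embedding)
);
"""

# Nearest-neighbour retrieval: the chunks closest to the question's
# embedding become the context passed to the LLM in the prompt.
QUERY_SQL = """
SELECT content
FROM docs
ORDER BY VEC_DISTANCE_EUCLIDEAN(embedding, VEC_FromText(?))
LIMIT 3;
"""

def euclidean(a, b):
    """Plain Euclidean distance, the same measure the query above uses."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy corpus with made-up 3-d embeddings.
corpus = {
    "MariaDB Vector stores embeddings": [0.9, 0.1, 0.0],
    "RAG grounds LLM answers in your data": [0.7, 0.3, 0.1],
    "Helsinki hosts a Python meetup": [0.0, 0.9, 0.4],
}

question_vec = [0.85, 0.15, 0.05]  # pretend embedding of the user's question
ranked = sorted(corpus, key=lambda doc: euclidean(corpus[doc], question_vec))
context = "\n".join(ranked[:2])  # top-2 chunks become the prompt context
```

In a real application the `SCHEMA_SQL` and `QUERY_SQL` statements would run through a MariaDB connection, with the server doing the distance ranking, so the Python side only embeds the question and assembles the prompt.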
…
Continue reading “Try RAG with MariaDB Vector on your own MariaDB data!”