With LLM-based functions, EVA will support more interesting queries like this:
SELECT ChatGPT(TextSummarizer(SpeechRecognizer(audio)),
               "Is this video related to the Russia-Ukraine war?")
FROM VIDEO_CLIPS;
Here, EVA sends the audio track of each video clip to a speech recognition model from Hugging Face, then passes the recognized text to a text summarization model. Both models run on local GPUs. Lastly, EVA sends the text summary to ChatGPT as part of the prompt. The ChatGPT UDF is executed remotely.
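The per-clip pipeline described above amounts to function composition: local speech recognition, local summarization, then a remote ChatGPT call. A minimal sketch follows; the three functions below are stand-in stubs for illustration, not EVA's actual UDF implementations or the Hugging Face/OpenAI APIs:

```python
# Sketch of the per-clip pipeline, with stubs in place of the real
# Hugging Face models and the remote ChatGPT call.

def speech_recognizer(audio: bytes) -> str:
    # Stand-in for a speech recognition model executed on a local GPU.
    return "transcript of the clip"

def text_summarizer(text: str) -> str:
    # Stand-in for a summarization model executed on a local GPU.
    return f"summary of: {text}"

def chatgpt(summary: str, question: str) -> str:
    # Stand-in for the remote ChatGPT call; the summary becomes
    # part of the prompt alongside the user's question.
    prompt = f"{question}\n\n{summary}"
    return f"answer derived from a {len(prompt)}-character prompt"

def answer_for_clip(audio: bytes, question: str) -> str:
    # Mirrors ChatGPT(TextSummarizer(SpeechRecognizer(audio)), question).
    return chatgpt(text_summarizer(speech_recognizer(audio)), question)
```

The nesting order matches the SQL expression: the innermost call runs first, and only the compact summary (not the raw audio or full transcript) is shipped to the remote model.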
The critical feature of EVA is that its query optimizer factors in the dollar cost of running models for a given AI task (such as a question-answering LLM). It picks the model pipeline with the lowest cost that still satisfies the user's accuracy requirement.
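One way to picture this optimization step: given candidate pipelines annotated with dollar cost and accuracy, discard those below the accuracy requirement and keep the cheapest survivor. The candidate list and numbers below are purely illustrative, not EVA's internal representation:

```python
def pick_pipeline(candidates, min_accuracy):
    """Return the cheapest pipeline meeting the accuracy requirement.

    candidates: list of (name, dollar_cost, accuracy) tuples.
    """
    feasible = [c for c in candidates if c[2] >= min_accuracy]
    if not feasible:
        raise ValueError("no pipeline satisfies the accuracy requirement")
    return min(feasible, key=lambda c: c[1])

# Hypothetical candidate pipelines (name, cost per query in $, accuracy).
candidates = [
    ("whisper-large + gpt-4",   0.12, 0.95),
    ("whisper-base + gpt-3.5",  0.03, 0.88),
    ("wav2vec2 + local-llm",    0.01, 0.80),
]
```

With a 0.85 accuracy requirement this selects the mid-tier pipeline; tightening the requirement to 0.90 forces the optimizer onto the most expensive one.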
Got it! EVA is designed for the local use case. You can define a Python function that wraps the LLM and use it anywhere in a query (we refer to such functions as user-defined functions, or UDFs).
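The general shape of such a wrapper can be sketched as below. EVA's actual UDF registration interface is not reproduced here, and the remote LLM call is replaced by a stub, so treat this as the outline of the idea rather than working EVA code:

```python
# Sketch of a Python UDF wrapping an LLM call.

def call_llm(prompt: str) -> str:
    # Stub for the remote LLM API (e.g. an OpenAI chat endpoint).
    return f"response to: {prompt[:40]}"

def chatgpt_udf(text: str, question: str) -> str:
    """UDF body: combine the row's text with the user's question
    into a single prompt and forward it to the LLM."""
    prompt = f"{question}\n\nContext:\n{text}"
    return call_llm(prompt)
```

Once registered with EVA, a function like this can be invoked by name inside SELECT expressions, exactly as ChatGPT is in the query above.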