Events

Meet the team and learn about our ML monitoring methodologies
and best practices.

Beyond fine-tuning: Approaches in LLM optimization

Join our next webinar on approaches in LLM optimization, and together, let's dive into techniques, methodologies, and best practice...

Unraveling prompt engineering

Join our next webinar on Unraveling Prompt Engineering! Let's take the "art" of prompt engineering and unravel it into best practices and methodologies.

Navigating the future: Emerging architectures for LLM applications

What are the emerging architectures for LLM applications? From data preprocessing and embedding to strategies for prompt construction and retrieval, and hosting solutions for LLMs, here's what you need to know!

To train or not to train your LLM

Under what circumstances does training your own LLM lead to more stable outputs? Here are 3 reasons why training an LLM is advantageous and 3 reasons why relying on out-of-the-box solutions may be preferable. 

Meet Elemeta: Metafeature extraction for unstructured data

LLMs are everywhere, front and center of AI discourse these days. But we've got to be honest here: it's hard to understand how they make decisions, let alone explain and monitor them. So earlier this week, we released Elemeta into beta, our open-source library for exploring, monitoring, and extracting features from unstructured data.
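
If you're curious what metafeature extraction looks like in practice, here's a minimal sketch. It assumes the MetafeatureExtractorsRunner interface from Elemeta's README (installed via pip install elemeta); exact module paths and metafeature names may vary between releases.

```python
# Minimal sketch of metafeature extraction with Elemeta.
# Assumes the MetafeatureExtractorsRunner API from Elemeta's README;
# module paths and metafeature names may differ between releases.
from elemeta.nlp.runners.metafeature_extractors_runner import MetafeatureExtractorsRunner

runner = MetafeatureExtractorsRunner()

# Run the default set of extractors (word counts, sentiment,
# special characters, and so on) over one piece of unstructured text.
text = "LLMs are everywhere, but monitoring their inputs and outputs is hard."
metafeatures = runner.run(text)  # dict of {metafeature_name: value}

for name, value in metafeatures.items():
    print(f"{name}: {value}")
```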

Improving search relevance with ML monitoring

Let's dive into ML systems for ranking and search relevance built on architectures such as Elasticsearch and vector databases like Pinecone, and what it means to monitor them for quality, edge cases, and corrupt data.