Open-Source Slack AI – How & Why I Built v1.0 in 3 Days

How I made a Slack App MVP to summarize threads and provide channel overviews using open-source tools and OpenAI.

The official Slack AI product looks great, but with limited access and add-on pricing, I decided to open-source the version I built.

If you’re anything like me, you’ve been carefully following the rapid evolution of AI technology, particularly LLMs. As a product leader, it’s essential to keep your finger on the pulse, but when things are moving so fast, it’s not easy to stay up-to-date without giving up all your nights and weekends.

At Tatari, we recently decided to have a hackathon to get our hands dirty with these tools, and I decided to open the code editor and get my hands on the keys for a change of pace. We’ve been building and shipping ML features for a few years, but with all the new tools available, there’s plenty more opportunity at our doorstep.

Open-Source Slack AI

If you’re here because you’re interested in using the app rather than the lessons learned and the process of building it, here’s what it does and how to get it up and running for yourself!

If you get any value or like the idea, please take a moment and star the repo!

This repository is a ready-to-run, basic Slack AI solution you can host yourself, unlocking the ability to summarize threads and channels on demand using OpenAI (support for alternative and open-source LLMs will be added if there’s demand). The official Slack AI product looks great but has limited access and add-on pricing.

Once up and running (full setup instructions are provided below), all your Slack users will be able to generate the following for both public and private channels:

  1. Thread summaries – Generate a detailed summary of any Slack thread (powered by GPT-3.5-Turbo)
  2. Channel overviews – Generate an outline of the channel’s purpose based on the extended message history (powered by an ensemble of NLP models and a little GPT-4 to explain the analysis in natural language)
  3. Channel summaries (beta) – Generate a detailed summary of a channel’s extended history (powered by GPT-3.5-Turbo). Note: this can get very long!

The Problem Worth Solving – Slack AI

As a product leader, I spend a lot of time on Slack (and in meetings), so that’s where I focused my attention. Scratching your own itch is one of the easiest ways to identify a problem worth solving. Plus, in a hackathon, having a heavily constrained UI helps save a huge portion of the work! It also meant I didn’t need to think about our existing codebase and could work entirely in a green field.

After some brainstorming, I landed on the idea:

A product to streamline the catching-up process when you’re returning from PTO

I’m sure you can think of plenty of times when you or a colleague has spent a day or more “catching up” after being out of the office. Seems like a problem worth solving, right?

Once I started to break down the problem into jobs to be done (JTBD) – and more solvable pieces – I realized the key jobs weren’t limited to returning from PTO.

Needing to catch up on an asynchronous conversation happens regularly, not just when you’ve been on vacation. It might be a hackathon, but this was a great reaffirmation that the problem is truly worth solving: all else being equal, a problem that occurs more frequently is more worthy of a solution.

The Jobs To Be Done

After a bunch of white-boarding, I landed on two primary JTBD:

  1. When I need to understand a long thread, I want to do so without missing anything essential and without wasting time on extra detail so that I can quickly add value.
  2. When joining a Slack channel for the first time, I want to understand the purpose and history of the channel so that I can quickly add value and be sure I’m in the right place.

These job stories were the two features I set out to build.

The Slack AI Solution Hunch

It was time to capture my **solution hunch: what I imagined to be a viable solution based on my existing customer and domain knowledge**. Since this was a hackathon, I primarily drew from my personal experience working in Slack every workday since 2011. In regular product life, I’d talk to customers here and throughout the process, but in a hackathon, blindly scratching your own itch is both totally okay and often necessary!

**The Hunch:** I’d take the message history, pass it to an LLM with an appropriate prompt, and return that to the user. For threads, it would be all the messages in the thread, and for channels, it would be (some or all) of the channel’s message history. This turned out to be an oversimplification. More on that later.
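In code, the hunch boils down to flattening the thread into a transcript and wrapping it in an instruction. Here’s a minimal sketch; the function name, message shape (`user`/`text` keys, as Slack’s `conversations.replies` returns them), and prompt wording are my illustrations, not the project’s actual code:

```python
def build_summary_prompt(messages: list[dict]) -> str:
    """Flatten a Slack thread into a transcript and wrap it in a
    summarization instruction. Illustrative only -- the real prompt
    needs much more care (more on that later)."""
    transcript = "\n".join(f"{m['user']}: {m['text']}" for m in messages)
    return (
        "Summarize the following Slack conversation, keeping only the "
        "essential points and decisions:\n\n" + transcript
    )

# The resulting prompt would then be sent to a chat-completion endpoint,
# e.g. client.chat.completions.create(model="gpt-3.5-turbo",
#          messages=[{"role": "user", "content": prompt}])
```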

The Riskiest Assumptions

It was time to identify my riskiest assumptions and get to work validating or mitigating them. One of the central assumptions to making this work was whether OpenAI could provide a *helpful* summary given the message history as context. With the amount I was already using ChatGPT and the experimentation I’d done with various LLMs, I felt comfortable pushing this later in the process.

Instead, my riskiest assumption was that I could build a functional Slack app in the available time… A small side effect of choosing to be a team of one!

When approaching a project, whether a hackathon, feature, or a whole new startup, it’s important to start with your riskiest assumptions and go from there. Remember that dependencies will force your hand in certain cases, and that’s fine.

The bigger the scope, the more helpful and necessary it becomes to get things out of your head. Post-its or a notebook are my personal preference when I don’t need to collaborate with others. When I do, it’s hard to beat a Figma FigJam.

In a regular product development context, the riskiest assumption of this project might have been that we could develop a compelling enough summary to motivate people to bother with a Slack app (i.e., a Desirability Risk).

Your riskiest assumption will be highly contextual. For example, next time I want to build something like this, building the Slack app won’t be a risky assumption.

How I Approached My Riskiest Assumption (Anecdote)

For this stage, I’m taking off my PM hat and putting on my developer hat. If you’re here for the Product Management lessons, jump to the next section.

Having had great success with side projects using a tutorial as the foundation for building entire features or products, I searched for a relevant tutorial and found one on Medium.

Part of my goal with this project was to try out some new tech by building something with Large Language Models. I’d also wanted to test FastAPI (a Python framework for building APIs, with a reputation for being easy to work with and fast to build with).

The tutorial I found used Flask, a widely adopted library for building Python APIs. I’d used it on past projects and honestly didn’t like it very much. The tutorial covered exactly what I needed but used the ‘wrong’ framework. Thankfully, ChatGPT’s **GPT-4 helped me translate it with around 20 minutes of back and forth**.

Pro tip: trust the chatbot more than you might be inclined to; it’s more capable of fixing its mistakes or slightly dated information than you may think.

It turns out that picking FastAPI instead of Flask saved me from a mid-project migration! The time required to process large channels and threads meant the standard HTTP-based Slack app integration would time out. The solution was to use websockets (Slack’s Socket Mode) instead of REST, which is apparently far easier in FastAPI than in Flask. It took only a few lines of changes and additions to my existing code, plus a configuration change in the Slack app settings. If I’d used Flask, the best fix might have been to switch to FastAPI…
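The underlying issue is that Slack expects a command to be acknowledged within roughly three seconds, while summarizing a large channel can take far longer. The fix, regardless of transport, is to acknowledge immediately and finish the heavy work in the background. Here’s a stdlib-only sketch of that ack-then-process pattern; all names are hypothetical, and `slow_summarize` stands in for the slow OpenAI call:

```python
import asyncio
import time

async def slow_summarize(text: str) -> str:
    """Stand-in for a long-running LLM call (e.g., a slow OpenAI request)."""
    await asyncio.sleep(0.2)
    return f"summary of {len(text.split())} words"

async def handle_command(text: str) -> tuple[str, asyncio.Task]:
    """Acknowledge immediately; run the heavy work as a background task."""
    task = asyncio.create_task(slow_summarize(text))
    # The ack goes back well inside Slack's ~3-second window...
    return "Working on it...", task

async def demo() -> tuple[float, str, str]:
    start = time.monotonic()
    ack, task = await handle_command("hello world " * 10)
    ack_latency = time.monotonic() - start
    # ...and the finished summary is posted to the channel later.
    summary = await task
    return ack_latency, ack, summary
```

With Socket Mode, the Slack SDK’s handler manages the connection and lets you ack and respond on your own schedule, which is what made the fix so small.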

Once I installed the Slack app, I set up a working command for summarizing threads and channels, and the app successfully responded with, “Insert awesome summary here!” I marked this risk as mitigated.

Building software with a steel thread

A concept I’ve adopted in my department over the last few years is the “steel thread” approach to building software, coined by Jade Rubick. The basic premise is that software is best built by creating the simplest possible end-to-end experience and then layering complexity and capabilities on top of it. This contrasts with the tempting process of starting at A and building piece by piece to Z – an approach that’s necessary in the physical world of bridges and skyscrapers but unnecessarily risky when building software (and even in the physical world, building linearly from A to Z is becoming an outdated approach).

We’ve found the steel thread approach particularly important in complex AI and ML features where it’s easy to get tripped up late in the process by choices you made early on. It’s also a great way to set yourself up for continuous delivery, front-loading delivery, and being able to test actual prototypes with users, proxy users, and stakeholders.

By way of example, a steel thread for this project was a Slack app with a command that would take the chat history, pass the first 500 words to OpenAI with the prompt “summarize this,” and then give the response back to the user. With these basics in place, I had a functional proof of concept. In regular AI/ML product work, we would need to include DevOps/MLOps since that’s such a common sticking point, but this hackathon project would stay on my laptop, so I could skip that step.
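That steel-thread slice is small enough to sketch in a few lines. The function name and shape are mine, not the project’s, but the logic matches the description above: join the history, keep the first 500 words, and prefix the bare-bones prompt.

```python
def steel_thread_payload(history: list[str], word_limit: int = 500) -> str:
    """Simplest end-to-end slice: truncate the channel history to the
    first `word_limit` words and prepend the minimal prompt. Everything
    fancier (chunking, better prompts) gets layered on later."""
    words = " ".join(history).split()
    return "summarize this: " + " ".join(words[:word_limit])
```

The point isn’t that this is good summarization; it’s that every layer of the system (Slack command, history fetch, OpenAI call, response) is exercised from day one.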

Addressing UX in a rigid & existing UI


I’ve been working on this article & the open-source Slack AI project for a WHILE and decided I needed to publish what I have. If you want the full story, join my product leadership newsletter to get notified when this article is finalized (or check back here). Thanks for reading!

Stay tuned – the rest of the story is coming soon!