Fabric AI Tooling

I decided to abandon AI assistance when writing this blog post, and all blog posts going forward. There's just something about a consistent writing habit that makes a difference in everyday life. Over the last two weeks I have been playing around with Fabric (https://github.com/danielmiessler/Fabric) by Daniel Miessler. It came to my attention through a NetworkChuck video, and I immediately wanted to build something with this tool. Here are some of my findings and the decisions I made across a few different projects using it. The first is analyzing my daily task list, what I have accomplished for the day, and the second is grabbing YouTube videos and ranking them by how interesting and relevant they are to my interests.

I am not going to go over how to install Fabric in this post because the README on GitHub is very clear and far more detailed, and there are also a few different creators on YouTube who show how to install and use the application. Instead I'll go over some of the decisions I made when using the tool and how I got it to do what I wanted in a few iterations. There are definitely more iterations to follow to optimize the Python programs I wrote, as well as some additional projects I'd like to create. For instance, using my subscription to Global Security as a basis for surfacing the most relevant current public military information (text, audio, code, images, and other types of media), building it into a vector DB, and having a model retrieve information based on prompts. I read about an AI tool that Lockheed Martin is developing and think it would be fun to build a baby version of that tool for personal consumption. That's an aside, though; my main focus for this project was just getting a good handle on the tool.

One of the first choices I had to make when using Fabric was which model to use to analyze my journal. I tried a few out, but I decided to go with a local model because I did not want to feed my journal entries or daily task accomplishments to an online model. I ended up going with mistral-nemo because it is highly optimized for language-first reasoning, handles big texts and multiple documents easily, and has a lower VRAM footprint.
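Under the hood this is mostly just piping text into the fabric CLI with a pattern and a model name. Here's a minimal sketch of the wrapper I use, assuming fabric is on your PATH and mistral-nemo is available locally through Ollama (the long-form flag names are what my version accepts; check `fabric --help` for yours):

```python
import subprocess

def build_fabric_cmd(pattern, model="mistral-nemo"):
    """Assemble the fabric invocation (flag names may differ by version)."""
    return ["fabric", "--pattern", pattern, "--model", model]

def run_fabric(text, pattern, model="mistral-nemo"):
    """Pipe text through a Fabric pattern and return the model's output.

    Assumes the fabric CLI is installed and the model has already been
    pulled locally (e.g. via Ollama), so nothing leaves the machine.
    """
    result = subprocess.run(
        build_fabric_cmd(pattern, model),
        input=text,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

Keeping the command-building separate from the subprocess call makes it easy to swap models per pattern, which matters later when I hand the harsher critiques to uncensored models.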


What I do is pretty much write out a list of the daily tasks I accomplished, like "2 hours spent studying for the Cisco exam" or "1h30min spent running," and feed it into different analysis patterns like find blind spots or find negative thinking. There are a few custom patterns as well, like suggesting a schedule for tomorrow, or breaking the day down into smaller summaries and storing them in a long-term text file (memory for the AI). I also use a few uncensored models to provide stronger criticism that won't pull any punches when critiquing my daily tasks.
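Before anything reaches a pattern, the journal lines get parsed into durations and descriptions so totals can be computed. A rough sketch of that step, built around the loose duration style I use in my own notes (the exact format is my own convention, not anything Fabric requires):

```python
import re

# Matches an optional hour part ("2h", "2hours"), an optional minute
# part ("30min"), then the free-text description of the task.
ENTRY_RE = re.compile(
    r"^(?:(\d+)\s*h(?:ours?)?)?\s*(?:(\d+)\s*min(?:utes?)?)?\s*(.*)$",
    re.IGNORECASE,
)

def parse_entry(line):
    """Turn '2hours spent studying for cisco exam' into (120, 'spent studying for cisco exam')."""
    m = ENTRY_RE.match(line.strip())
    hours = int(m.group(1) or 0)
    minutes = int(m.group(2) or 0)
    return hours * 60 + minutes, m.group(3).strip()

def daily_report(lines):
    """Summarize a day's entries with a total, ready to pipe into a pattern."""
    entries = [parse_entry(l) for l in lines if l.strip()]
    total = sum(mins for mins, _ in entries)
    header = f"Total tracked: {total // 60}h{total % 60:02d}min"
    body = "\n".join(f"- {mins}min: {desc}" for mins, desc in entries)
    return header + "\n" + body
```

The report string is what gets piped into patterns like find blind spots, so the model sees both the totals and the individual tasks.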

I played around with different ways to store a large amount of memory, or history, about past days so that the model could give me feedback based on trends it noticed. The first idea I tried was loading the full text of all past days, but you quickly run into issues with oversized context and context rot. The second was condensing the days into smaller long-term summaries, using a custom Fabric pattern to compress or chunk them so the prompt used fewer tokens. The last thing I tried was loading the data into Qdrant, a vector database, and structuring it so retrievals were easy with filter-based calls. The vector database still needs a little more time to perfect, but it works. I just think the better the data organization, the more efficient the program gets, so I'll definitely be circling back to it before I really tackle creating a global security advisor.
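The middle approach, condensing past days to fit a budget, can be sketched in a few lines. This is a simplification of what the custom compression pattern does: here a character budget stands in for real token counting, and the oldest days get dropped first when the history no longer fits:

```python
def compress_history(day_summaries, budget_chars=4000):
    """Keep the most recent day summaries that fit within a rough
    character budget (a cheap stand-in for counting tokens).

    day_summaries: list of (date, summary) tuples, oldest first.
    Returns a single string, oldest kept day first, ready to prepend
    to a prompt as long-term memory.
    """
    kept = []
    used = 0
    # Walk newest-first so recent days always survive the cut.
    for date, summary in reversed(day_summaries):
        line = f"[{date}] {summary}"
        if used + len(line) > budget_chars:
            break
        kept.append(line)
        used += len(line)
    return "\n".join(reversed(kept))
```

The Qdrant version replaces this truncation entirely: each day's summary becomes a point with date and category metadata, so the retrieval step can pull only the relevant days with a filtered search instead of shipping the whole history into the prompt.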

I also spent some time experimenting with the YouTube features of Fabric. I worked on a simple program that takes a list of my top creators, scrapes their video pages for their most up-to-date videos, has Fabric process each one and extract the wisdom, and then feeds that wisdom to a local AI that assigns a sentiment score based on keywords. The top 50 videos are written to a file with their wisdom, ranked by score, and then I go watch the ones I find the most interesting. I still need to work on biases so that longer transcripts are not preferred over shorter videos, as well as the self-promotion bias where a model is more likely to pick videos that reference it and give them higher sentiment scores (OpenAI models). This is an idea that came directly from the creator of Fabric; it is how he intended the product to be used. Some improvements I'd like to make are dynamically grabbing creators, or having each run add a few relevant ones, and giving priority to the most current videos. This will go through a few more iterations as well. Right now it's in neonatal form, but it will soon grow into an infant.
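The length bias is the easiest one to patch: normalizing keyword hits by transcript length keeps long videos from winning by default. A minimal sketch of that scoring idea (the real program feeds Fabric's extracted wisdom to a local model; this is just the keyword fallback, and the keyword set is whatever you care about):

```python
def keyword_score(transcript, keywords):
    """Score a transcript by keyword density rather than raw hit count,
    so longer transcripts aren't automatically favored."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in keywords)
    return hits / len(words)

def rank_videos(videos, keywords, top_n=50):
    """videos: list of (title, transcript) pairs.
    Returns the top_n titles, highest density first."""
    scored = sorted(videos, key=lambda v: keyword_score(v[1], keywords), reverse=True)
    return [title for title, _ in scored[:top_n]]
```

Density is a blunt instrument, but it's a reasonable baseline to compare the model's sentiment scores against when hunting for the self-promotion bias.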

