I believe that it was @Tobbe who first explored this concept - as an example of how to use RedwoodJS RSC to invoke AI services from Welcome | Fixie Developer Portal. As I am flirting with AI (in the context of using it as a tool to help software developers), I found @Tobbe's approach fascinating.
Then I became aware of a completely different approach to the same goal, initiated by @keith.t.elliott, who encouraged @jace, @PantheRedEye, and myself to create a Fixie Developer Portal based RedwoodJS Sidekick. Despite reacting enthusiastically to @keith.t.elliott's idea, I waited too long to join this (experimental) project, so Keith & co. created the very first prototype (currently existing as a PR) - https://deploy-preview-130--redwoodjs-com.netlify.app/sidekick which renders the following page:
In addition to the above-mentioned RedwoodJS team members, several Fixie team members actively participated in building this prototype.
This is not an official RedwoodJS project; the RW core team members who created this prototype did that work in their free time (after their day jobs and after work on their official RW projects). @keith.t.elliott, @Tobbe, @jacebenson, and @PantheRedEye need a lot of help to bring this prototype to the product level, and I believe it could become a critical component of RW development.
People reading this might ask why this article was written, so let me answer that question. The purpose is to make it known that there is an ongoing "unofficial" project where a few core team members are trying to present the RedwoodJS online documentation to the RedwoodJS community using RAG (Retrieval-Augmented Generation):
- an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides the data. Adding an information retrieval system gives you control over the data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain natural language processing to your enterprise content sourced from vectorized documents, images, audio, and video.
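To make the retrieval idea above concrete, here is a minimal, hypothetical sketch of RAG's core loop: score indexed documents against a question, then hand the best match to the LLM inside a grounded prompt. This is not the Sidekick code; the sample docs, the toy word-overlap scoring function (standing in for real vector-embedding similarity), and the prompt template are all my own assumptions for illustration.

```typescript
// Sketch of the RAG retrieval step. A real system would embed the
// question and documents as vectors and rank by cosine similarity;
// a toy word-overlap score stands in here so the example is runnable.

type Doc = { title: string; text: string };

// Hypothetical stand-ins for vectorized RedwoodJS docs.
const docs: Doc[] = [
  {
    title: "Cells",
    text: "A Cell declaratively manages the lifecycle of a GraphQL query in RedwoodJS.",
  },
  {
    title: "Router",
    text: "The Redwood Router maps URL paths to pages and supports typed route parameters.",
  },
];

// Toy relevance score: fraction of question words found in the document.
function score(question: string, doc: Doc): number {
  const words = question.toLowerCase().split(/\W+/).filter(Boolean);
  const body = (doc.title + " " + doc.text).toLowerCase();
  const hits = words.filter((w) => body.includes(w)).length;
  return hits / words.length;
}

// Retrieve the best-matching document and build a grounded prompt,
// constraining the LLM to the retrieved content.
function buildPrompt(question: string): string {
  const best = [...docs].sort((a, b) => score(question, b) - score(question, a))[0];
  return `Answer using ONLY this context:\n${best.title}: ${best.text}\n\nQuestion: ${question}`;
}

console.log(buildPrompt("What is a Cell in RedwoodJS?"));
```

The key design point is the last function: the model never answers from its general training data alone; it is prompted with the retrieved enterprise content, which is what gives you control over the sources behind each response.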
@keith.t.elliott, who leads this project, may eventually join this discussion (a monologue at the moment) to describe his ultimate vision. His availability is rather limited, so I plan to keep adding to this article, describing my own vision.
Hey, I am not sure what all is known, but after meeting in person, Keith and I discussed working on something together. He's been deep into some AI stuff all summer, and I have been too. The Fixie lab done at the conference, and their generous offer to host the embeddings and agent to add some AI-supported docs, seemed like a new way to contribute.
There are two PRs open for this: one to add a link from the docs, the other used to deploy the agent and collection.
It looks like that initial text might be because of a setting on the agent. On Fixie's dashboard you can toggle whether the agent sends a greeting or not; I'm guessing the greeting has to be part of the prompt, or something to that effect. It's a good call-out.
When I load up that deployed preview, it shows a more on-point message.
This more on-point message is somewhat random, as I am pretty sure that we run the same instance of the Sidekick app.
Similarly, running this app twice with the same prompt (Getting started with prompts for text-based Generative AI tools | Harvard University Information Technology), you would likely get different responses, a consequence of the LLM getting smarter (and also a consequence of RAG).