Thoughts on this please.

An application whose total working overhead is 2.246 GB. The application is modular if need be. It interfaces with a large language model, giving the LLM read and write authority over a data storage system. When the LLM creates a new entry, it also creates a metadata tag describing the contents of the write it performed. I currently have 949.2167 terabytes compressed into a 908 GB file; the compression/decompression is lossless. I also have a working version for mobile devices, with a smaller data capacity but still huge by comparison. The LLM uses the metadata to navigate the dataset. The application is currently focused on generative desktop employee task automation: I explained the task, then performed the task, let the LLM create a "memory" of the task, store it, and gave it the ability to determine what data from past experiences it has gained to achieve this task. And this works flawlessly. Next steps?
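To make the discussion concrete, here is a minimal sketch of the metadata-tagged write/navigate pattern described above. All names (`MemoryStore`, `write`, `navigate`) are hypothetical, and the LLM call that would summarize each entry into a tag is stubbed out as a caller-supplied string:

```python
import uuid

class MemoryStore:
    """Minimal sketch: every write is paired with a short metadata tag
    (which the LLM would generate), and navigation scans only the tags,
    never the full entries."""

    def __init__(self):
        self.entries = {}   # entry_id -> full content
        self.metadata = {}  # entry_id -> metadata tag

    def write(self, content: str, tag: str) -> str:
        # In the described system the LLM produces `tag` by summarizing
        # `content`; here the caller supplies it for illustration.
        entry_id = str(uuid.uuid4())
        self.entries[entry_id] = content
        self.metadata[entry_id] = tag
        return entry_id

    def navigate(self, keyword: str) -> list:
        # The LLM navigates via metadata only, loading a full entry
        # on demand once its tag matches.
        return [
            self.entries[eid]
            for eid, tag in self.metadata.items()
            if keyword.lower() in tag.lower()
        ]

store = MemoryStore()
store.write("Steps to batch-rename invoices...", "task: invoice renaming workflow")
store.write("Steps to export the weekly report...", "task: weekly report export")
print(store.navigate("invoice"))
```

The design choice this illustrates is that retrieval cost scales with the size of the metadata index, not the (much larger) compressed data store.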

submitted by /u/Ok_Nobody_9659
