Sprint reports
Sprint 1:
Styled table displays have been implemented
A loader animation now appears while waiting for AI responses
Error messages from the AI are displayed if they occur
The functionality for uploading multiple files has been added
Mandatory fields (marked with a red asterisk) are now validated when submitting requests to the AI
Explorers have the option to submit their own API keys for AI integration
File upload functionality to the backend has been introduced, with front-end integration pending
Only the Request field is mandatory
The default Temperature value is set to 0.7
Increased the AI request timeout to 120 seconds (see the sketch below)
Resolved an error that produced invalid Unicode in AI output
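A minimal sketch of the AI call with the raised 120-second timeout, assuming a requests-based backend; the endpoint URL, payload fields, and plain-text response are illustrative, not the project's actual API:

```python
import requests

AI_TIMEOUT_SECONDS = 120  # raised from the previous, shorter limit

def ask_ai(prompt: str, api_key: str, temperature: float = 0.7) -> str:
    """Send a request to the AI and return its text output."""
    response = requests.post(
        "https://api.example.com/v1/complete",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "temperature": temperature},
        timeout=AI_TIMEOUT_SECONDS,  # fail with an error instead of hanging
    )
    response.raise_for_status()
    response.encoding = "utf-8"  # decode explicitly; guards against garbled output
    return response.text
```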
Sprint 2:
Implemented processing of CSV and TXT files, specifically for Opus.
Modified the process for entering AI API keys: initially handled through an Edit/Save workflow, then transitioned to a standard input field that masks characters during entry and is mandatory.
Separated the Temperature setting into individual input fields for Opus and GPT.
Adjusted the default Temperature value to 0.7.
Introduced a button to clear the selected file from the file input after choosing it.
Applied various cosmetic adjustments to the form, such as modifying message texts and rearranging block layouts.
Successfully deployed and configured FreshRSS.
Implementing a feature to output data from the RSS database directly into a table on dataexplorers.ai. This bypasses the initial step of converting the data to CSV, so editors no longer have to rely on third-party applications such as spreadsheets (see the sketch below).
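A minimal sketch of reading entries straight from the FreshRSS database for the table view, assuming mysql-connector-python and FreshRSS's default per-user table naming (freshrss_<user>_entry); credentials and names are placeholders:

```python
import mysql.connector

def fetch_entries(user: str = "admin", limit: int = 50) -> list[dict]:
    """Return the most recent RSS entries as rows for the table view."""
    conn = mysql.connector.connect(
        host="localhost",
        user="data_explorers_user",
        password="...",          # placeholder credentials
        database="freshrss",
    )
    try:
        cur = conn.cursor(dictionary=True)
        cur.execute(
            f"SELECT title, author, link, `date` "
            f"FROM `freshrss_{user}_entry` "  # assumed default FreshRSS naming
            "ORDER BY `date` DESC LIMIT %s",
            (limit,),
        )
        return cur.fetchall()
    finally:
        conn.close()
```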
Sprint 3:
Migrated FreshRSS from SQLite to MySQL for improved performance and scalability.
Configured external access for the FreshRSS MySQL database
Added a MySQL authorization method for the Exporter
Granted data_explorers_user permission to create databases and modify table structures (see the sketch at the end of this sprint's notes)
Set up the old version of FreshRSS on the rss-old subdomain for backward compatibility and reference.
Table constructor: provides a straightforward approach to handling structured data and supports multi-user mode, allowing flexible read and write operations.
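A minimal sketch of the privilege changes for data_explorers_user, run once under an administrative account; the host wildcard and global scope are assumptions, and the real grants may be narrower:

```python
import mysql.connector

# CREATE covers creating databases and tables; ALTER covers modifying
# table structures. '%' allows connections from external hosts.
GRANTS = [
    "GRANT CREATE, ALTER ON *.* TO 'data_explorers_user'@'%'",
]

conn = mysql.connector.connect(host="localhost", user="root", password="...")
cur = conn.cursor()
for statement in GRANTS:
    cur.execute(statement)
conn.close()
```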
Sprint 4:
Implemented filters for the Sources feed, making it easier for explorers to find specific articles (see the sketch at the end of this sprint's notes). Here's how it works:
The API request that retrieves data for the Sources feed accepts filtering on the Author/Tags/Category fields.
On the back end, the results are filtered with a database query, and the filtered data is passed to the front end for display on dataexplorers.ai
Implementing a connection between the Sources block and the AI request form (this will allow explorers to select articles directly from the Sources feed and seamlessly integrate them into the AI request process)
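A minimal sketch of the filtered Sources-feed endpoint, assuming Flask and an entries table with author/tags/category columns; all names here are illustrative rather than the production schema:

```python
from flask import Flask, request, jsonify
import mysql.connector

app = Flask(__name__)

@app.get("/api/sources")
def sources():
    # Build a WHERE clause only from the filter fields actually supplied.
    clauses, params = [], []
    for field in ("author", "tags", "category"):
        value = request.args.get(field)
        if value:
            clauses.append(f"{field} = %s")  # parameterized, not interpolated
            params.append(value)
    where = " WHERE " + " AND ".join(clauses) if clauses else ""

    conn = mysql.connector.connect(
        host="localhost", user="data_explorers_user",
        password="...", database="freshrss",  # placeholder credentials
    )
    cur = conn.cursor(dictionary=True)
    cur.execute(f"SELECT title, author, link FROM entries{where}", params)
    rows = cur.fetchall()
    conn.close()
    return jsonify(rows)
```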
Sprint 5:
Implemented the ability to select an AI model
Restored ChatGPT functionality and adapted it to new modules
Improved prompt formation when making a request to the AI (see the sketch at the end of this sprint's notes)
Increased the timeout for executing a request to GPT
Added entities (tabs) to the table constructor
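A minimal sketch of the improved prompt formation and model dispatch: selected source articles are prepended to the explorer's request before it goes to the chosen model. The template, model names, and per-model defaults are illustrative assumptions:

```python
DEFAULT_TEMPERATURE = {"opus": 0.7, "gpt": 0.7}  # assumed per-model defaults

def form_prompt(user_request: str, articles: list[dict]) -> str:
    """Prepend selected Sources-feed articles as context for the AI."""
    context = "\n\n".join(
        f"Title: {a['title']}\n{a['content']}" for a in articles
    )
    return (
        "Use the following source articles as context.\n\n"
        f"{context}\n\nRequest: {user_request}"
    )

def build_request(model: str, prompt: str, temperature: float | None = None) -> dict:
    """Assemble the payload for whichever model the explorer selected."""
    if model not in DEFAULT_TEMPERATURE:
        raise ValueError(f"Unknown model: {model}")
    return {
        "model": model,
        "prompt": prompt,
        "temperature": DEFAULT_TEMPERATURE[model] if temperature is None else temperature,
    }
```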
Sprint 6:
Filter data by title and sources, with additional ranking by date
Add the ability to save prompt/table constructor parameters
Add a trigger to periodically update data, replacing existing data in the database
Auto-update sources/categories based on the data groups created in rss.dataexplorers.ai
Combine data from multiple users' MySQL databases and display it on a web interface (see the sketch at the end of this sprint's notes)
Configure text formatting in the 'Content' section, 'Sources feed' block
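A minimal sketch of merging entries from several users' tables with UNION ALL and ranking by date; the user names and FreshRSS per-user table naming are assumptions:

```python
import mysql.connector

USERS = ["alice", "bob"]  # hypothetical FreshRSS user names

def combined_entries(limit: int = 100) -> list[dict]:
    """Merge every user's entries into one date-ranked result set."""
    union = " UNION ALL ".join(
        f"SELECT title, link, `date` FROM `freshrss_{u}_entry`" for u in USERS
    )
    conn = mysql.connector.connect(
        host="localhost", user="data_explorers_user",
        password="...", database="freshrss",  # placeholder credentials
    )
    try:
        cur = conn.cursor(dictionary=True)
        cur.execute(
            f"SELECT * FROM ({union}) AS all_entries "
            "ORDER BY `date` DESC LIMIT %s",
            (limit,),
        )
        return cur.fetchall()
    finally:
        conn.close()
```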
Sprint 7 (in progress):
Implemented CSV export for tabular AI responses (see the sketch below)
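A minimal sketch of the export step, assuming the tabular AI response has already been parsed into a header plus rows; Python's csv module handles quoting and commas inside cells:

```python
import csv
import io

def to_csv(header: list[str], rows: list[list[str]]) -> str:
    """Serialize a parsed tabular AI response to CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Example: to_csv(["country", "capital"], [["France", "Paris"]])
```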