AI Automation Tool | 2023
Client: PD4 Solutions, LLC
Skills Used: Python, OpenAI API, LangChain, ChromaDB, AWS Lambda, AWS S3, GitHub
The Problem
Systematic literature reviews provide rigorous, comprehensive analysis of existing research on a specific topic, helping to summarize the current landscape of the literature, identify gaps, and inform decision-making on the topic. However, these reviews can be very time-consuming and costly, taking approximately 3-6 months to complete and costing roughly $150K per review. Given that millions of new articles are published annually, the burden of conducting systematic literature reviews will only grow over time.
The Solution
ScholarlySync.ai
ScholarlySync.ai streamlines the systematic literature review process by automating the inclusion/exclusion decision through title and abstract screening. It pulls in published or unpublished papers from any database, lets users define the inclusion and exclusion criteria for their review, and uses an artificial intelligence algorithm to decide inclusion or exclusion based on those user-defined criteria.
I was responsible for building the AI screening algorithm and the scoring algorithm. The screening algorithm is context-aware and can interpret keywords as a human would, making fast, efficient screening of thousands of articles feasible. It also cites its reasoning for each inclusion/exclusion decision, demystifying the choices it makes.
The algorithm has two components: an NLP component and a historical component.
- The NLP component ranks the articles using reasoning from a Large Language Model (LLM).
- The Historical component ranks the articles based on your screening decisions from previous reviews.
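The exact weighting between the two components is internal to the product; the minimal sketch below simply illustrates the idea, assuming a weighted average of the two 0-10 scores (the weight value and function name are placeholders, not the shipped formula):

```python
# Illustrative only: the real weighting used by ScholarlySync.ai is not shown here.

def composite_score(nlp_score: float, historical_score: float,
                    nlp_weight: float = 0.7) -> float:
    """Combine the LLM-based score and the history-based score (both on a 0-10 scale)."""
    historical_weight = 1.0 - nlp_weight
    return nlp_weight * nlp_score + historical_weight * historical_score
```

In practice one could let the historical weight grow as the user screens more articles, but that is a design choice assumed here for illustration, not necessarily what the product does.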
How it works
To use ScholarlySync.ai, the user first names the review, selects a review type, and adds a description of the review topic. The AI algorithm uses this information to understand the context of the review.
After setting up the review, the user is then asked to filter the literature to be considered for the review. These filters include the language the article was written in, the publication year range, the type of article, and the database the article was sourced from.
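As a rough sketch of what that pre-filtering step might look like (the record fields and function below are illustrative assumptions, not the product's actual data model):

```python
from dataclasses import dataclass

@dataclass
class ArticleRecord:
    title: str
    abstract: str
    language: str       # e.g. "en"
    year: int
    article_type: str   # e.g. "journal-article"
    source: str         # e.g. "PubMed"

def apply_filters(articles, languages, year_range, article_types, sources):
    """Keep only records that satisfy every user-selected filter."""
    lo, hi = year_range
    return [
        a for a in articles
        if a.language in languages
        and lo <= a.year <= hi
        and a.article_type in article_types
        and a.source in sources
    ]
```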
Finally, the user is asked to add inclusion and exclusion criteria for the systematic literature review. The algorithm uses these criteria to generate an inclusion score for each article. The scoring system ranges from 0 to 10, with 10 corresponding to 'Yes', 0 to 'No', and everything in between to 'Maybe'.
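A minimal sketch of how this scoring step could be done with the OpenAI API, folding in the review description from the setup step for context; the prompt wording, model name, and parsing here are assumptions for illustration rather than the production algorithm:

```python
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def nlp_score(review_description: str, criteria: list[str],
              title: str, abstract: str) -> float:
    """Ask an LLM for a 0-10 inclusion score (10 = 'Yes', 0 = 'No', in between = 'Maybe')."""
    prompt = (
        f"Review topic: {review_description}\n\n"
        "Inclusion/exclusion criteria:\n"
        + "\n".join(f"- {c}" for c in criteria)
        + f"\n\nTitle: {title}\nAbstract: {abstract}\n\n"
        "On a scale of 0 to 10, how well does this article satisfy the criteria? "
        "Reply with a single number."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    match = re.search(r"\d+(\.\d+)?", response.choices[0].message.content)
    return float(match.group()) if match else 5.0  # fall back to 'Maybe' if unparseable
```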
Once all user-generated information is submitted, ScholarlySync.ai generates a ranker dashboard that shows the automated scoring of the articles, ranked by their relevance to the inclusion/exclusion criteria.
Each article has an AI scoring panel that gives the NLP score, the Historical score, and the overall Composite score. Users can accept the software's decision or manually screen the article as 'Yes', 'No', or 'Maybe', which helps the historical algorithm learn their screening tendencies.
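ChromaDB is listed in the stack above, so one plausible way to implement the historical component is embedding similarity against previously screened abstracts: each manual 'Yes'/'No'/'Maybe' decision is stored, and a new article's historical score is derived from the decisions on its nearest neighbours. This is a hedged sketch under that assumption, not a description of the shipped algorithm:

```python
import chromadb

LABEL_SCORES = {"Yes": 10.0, "Maybe": 5.0, "No": 0.0}

# Assumed storage layout: one ChromaDB collection of previously screened abstracts,
# each tagged with the user's manual decision.
client = chromadb.PersistentClient(path="./screening_history")
history = client.get_or_create_collection("screened_abstracts")

def record_decision(article_id: str, abstract: str, decision: str) -> None:
    """Store a manual 'Yes'/'No'/'Maybe' decision so future scoring can learn from it."""
    history.add(ids=[article_id], documents=[abstract],
                metadatas=[{"decision": decision}])

def historical_score(abstract: str, n_neighbours: int = 5) -> float:
    """Score a new abstract by averaging the decisions on its most similar past abstracts."""
    if history.count() == 0:
        return 5.0  # no history yet: neutral 'Maybe'
    results = history.query(query_texts=[abstract],
                            n_results=min(n_neighbours, history.count()))
    decisions = [m["decision"] for m in results["metadatas"][0]]
    return sum(LABEL_SCORES[d] for d in decisions) / len(decisions)
```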
In the scoring panel, ScholarlySync.ai also offers reasoning for the scoring. This reasoning includes the sentences that matched each of the criteria, making the AI's decision-making process transparent.
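One way to surface that reasoning is to request structured output from the model, pairing each criterion with the abstract sentences that matched it. The JSON schema below is an assumption about how such a panel could be populated, not the product's actual response format:

```python
import json
from openai import OpenAI

client = OpenAI()

def score_with_reasoning(criteria: list[str], title: str, abstract: str) -> dict:
    """Return an inclusion score plus, per criterion, the abstract sentences that matched it."""
    prompt = (
        "For the article below, give an inclusion score from 0 to 10 and, for each "
        "criterion, quote the abstract sentences that support your judgement.\n"
        'Respond as JSON: {"score": <number>, "criteria": '
        '[{"criterion": <text>, "matched_sentences": [<text>, ...]}]}\n\n'
        "Criteria:\n" + "\n".join(f"- {c}" for c in criteria)
        + f"\n\nTitle: {title}\nAbstract: {abstract}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # ask the API to return valid JSON
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
```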
The Feedback
“Myles was tasked with creating an integral part of our solution and did a fantastic job. He is easy to work with and is able to quickly incorporate new technology into the solution.”