Elsevier, publisher of science journals and clinical decision-support tools, is rolling out an AI-powered research tool it says is built to transform how researchers work.
LeapSpace is an AI-assisted workspace where researchers can carry out multiple tasks in one secure environment: generating ideas, planning projects, exploring literature, finding collaborators and identifying funding opportunities. LeapSpace combines agentic AI, generative AI, reasoning engines and retrieval-augmented generation to support a range of workflows. The AI can analyze abstracts and full text to offer structured, referenced answers.
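For readers unfamiliar with retrieval-augmented generation, the pattern works roughly like this: relevant passages are retrieved from a trusted corpus first, and the model then answers using, and citing, only those passages. The sketch below is purely illustrative; the toy corpus, overlap scoring and answer assembly are stand-ins for the proprietary retrieval and generation components a system like LeapSpace would use, not Elsevier's implementation.

```python
# Illustrative retrieval-augmented generation (RAG) loop:
# retrieve relevant passages, then compose a referenced answer from them.

# Toy corpus of abstracts keyed by identifier (hypothetical examples).
CORPUS = {
    "doi:10.1/a": "Transformer models improve literature search recall.",
    "doi:10.1/b": "Citation graphs help identify research collaborators.",
    "doi:10.1/c": "Grant databases support funding discovery workflows.",
}

def _tokens(text: str) -> set[str]:
    """Lowercase, punctuation-stripped word set for overlap scoring."""
    return {w.strip(".,") for w in text.lower().split()}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank abstracts by word overlap with the query -- a stand-in for
    the keyword/semantic retrieval a production system would use."""
    q = _tokens(query)
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q & _tokens(kv[1])),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    """Build a referenced answer from the retrieved passages only.
    A real system would pass these passages to a language model;
    here we simply list them with their source identifiers."""
    hits = retrieve(query)
    lines = [f"- {text} [{src}]" for src, text in hits]
    return "Referenced answer:\n" + "\n".join(lines)

print(answer("how to find research collaborators"))
```

Grounding answers in retrieved, citable sources (rather than the model's parametric memory alone) is what allows a tool to attach references to every claim and is the standard approach for reducing hallucination in literature-facing assistants.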
The tool and its underlying data are based on existing Elsevier products, including ScienceDirect AI and Scopus AI. These include what Elsevier claims is the world’s largest collection of research abstracts, plus millions of peer-reviewed full-text articles and books from Elsevier and other scientific publishers. LeapSpace effectively consolidates them in a digestible chatbot interface.
“We're able to pull from these two very different perspectives together for queries to get a much more comprehensive response,” Adrian Raudaschl, principal product manager at Elsevier, told Fierce Healthcare. LeapSpace’s risk of serious hallucination is less than 1%, he said, adding that “no AI tool is completely free from the risk of hallucination or misinterpretation.”
LeapSpace is available today for institutions to preview. Existing and new ScienceDirect AI customers will be automatically upgraded to LeapSpace when the full commercial rollout begins in Q1 2026.
The product was designed with thousands of researchers across 64 countries. Early users report that it saves significant time, improves research design, uncovers missed insights and deepens analysis, the company claims. None of the queries in LeapSpace will be used to improve or train any AI models.
Over the years of development, Elsevier has pinpointed three primary concerns among the researcher community when it comes to AI: a constant need for more applicable information, the possibility that AI will diminish critical thinking, and trust.
“People have inherent and rightful distrust of AI solutions,” Raudaschl said.
“What we have heard from customers across academia and industry is that privacy, security, trusted data and responsible use of AI are key to addressing their needs in ways that they trust,” echoed Max Khan, SVP of academic and government solutions at Elsevier, in emailed comments.
To that end, every AI-generated insight includes a Trust Card, a feature that shows sources, explains why a source was cited and surfaces contradictions. It is intended to give researchers the confidence to make informed decisions. This approach supports critical thinking, Elsevier argues, helping researchers understand the strength of given evidence.
Another LeapSpace feature is Deep Research, helpful for freer exploration or meta-analyses of ideas. The original version designed by Elsevier was citation-heavy and written in a highly convincing way, but testing revealed “that actually, that made researchers distrust it, rather than trust it,” Raudaschl explained. Elsevier updated the tool to better encourage critical thinking: a synthesis section now explains the AI’s reasoning and how it reached a conclusion.
Users can upload their own documents into LeapSpace for analysis, with the option to combine them with Elsevier’s other libraries of data. Another feature, Claim Radar, lets users explore what the consensus is on a given claim from different perspectives. The feature surfaces how much evidence supports or contradicts a claim.
The funding discovery component is based on 45,000 active and recurring grants worth over $100 billion. This data is drawn from Elsevier’s Funding Institutional database, which is updated daily. LeapSpace also features an author search, which helps researchers identify possible collaborators. Per Raudaschl, no strong author database today supports natural language search; existing tools rely on keyword search, which is far more restrictive.
LeapSpace is launching at a time of much buzz around AI’s potential for research and the life sciences industry. Recent data from Elsevier found distinct regional differences in perceptions of AI among researchers around the globe. The Researcher of the Future survey polled more than 3,200 active researchers in 113 countries. It found a significantly higher portion of researchers now use AI tools in their work (58%) compared to 2024 (37%).
North America was the region with the most negative perceptions of AI tools across a number of measures compared to other regions. The measures included time savings, choice, value, usefulness, autonomy, ethics and trust. The regions that scored highest were China and Japan. Elsevier cannot definitively say what drives the regional differences in sentiment toward AI.
Globally, 61% of researchers agree AI will drive new knowledge in the next two to three years, the survey found. But only a third agree there is good governance of AI at their institution and even fewer, 27%, agree they have received sufficient training in the use of AI for research.
The top current uses for AI in North America included finding and summarizing the latest research (13%) and performing literature reviews (14%). That’s compared to the much higher 31% and 20%, respectively, in the Asia Pacific region.
Elsevier anticipates LeapSpace will be particularly useful to researchers working in interdisciplinary fields or across multiple research areas.
“Our goal with LeapSpace over time is to address high-value areas of the end-to-end researcher workflow by combining trusted content with the responsible use of AI,” Khan said. He added that the goal is to help researchers “move from idea to impact faster” by supporting crucial workflow steps like ideation and research planning.
An independent advisory board will oversee LeapSpace’s transparency, ensuring its algorithms remain explainable and publisher-neutral. Every LeapSpace feature reflects Elsevier’s Responsible AI Principles, which emphasize transparency, explainability and human oversight. Users have full visibility into how results are generated.