RSpace 1.41, Feb. 2017 – Easy Archiving for Your Data
Complete integration with popular data archives & digital ID
RSpace has been designed as a complete bench-to-publication data management solution. An important part of the research data management (RDM) lifecycle is archiving lab data into suitable public or private repositories.
In version 1.41, we have improved support for exporting your data directly to popular archives like Figshare, Dataverse, and DSpace, and added support for embedding metadata from your unique ORCID digital ID directly into those exports. These additions make it easy to use RSpace to publish and cite raw data, and to archive lab data to your “lifetime” academic career portfolio. You can even use ORCID links in the RSpace directory to locate potential collaborators and dig deeper into their publication history. Archiving lab data with RSpace is simple and seamless, and the connection can be customized for whatever archive you use, even those not yet on our list. Exports also support good security and data integrity thanks to our digitally signed (SHA-protected) data export feature.
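To give a sense of how a SHA checksum protects an export, here is a minimal sketch (not RSpace’s actual implementation; the file name and expected digest below are placeholders):

    # Minimal sketch: verifying a SHA-256 checksum on an exported archive.
    # Any tampering with the file changes the digest, so a mismatch is detectable.
    import hashlib

    def sha256_of(path, chunk_size=65536):
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    expected = "d2c76c1b9f..."        # placeholder: digest recorded at export time
    actual = sha256_of("export.zip")  # placeholder file name
    print("Integrity OK" if actual == expected else "Checksum mismatch!")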
What’s next? Lots of exciting things are in the works! For starters, we have released the first version of our highly anticipated API, but we will say more about that next time….
RSpace 1.40, Jan. 2017 – Tools for Coders
Github and Code Snippets in RSpace
RSpace is designed to be flexible enough to support data management workflows for virtually any researcher. With technology now such an important part of modern research, we wanted to meet the needs of computer scientists and researchers who use code stored in GitHub. That’s why, in version 1.40, we’ve introduced two new code-related features.
You can now link to your GitHub repositories from RSpace documents, making it easy to point your downstream audience to your work there. See how in this video.
Additionally, for users who prefer to embed code snippets directly in RSpace, we have included a new “insert code snippet” tool in the text editor. To use it, click the new code snippet icon.
In the popup dialog, choose which language the code snippet is written in, paste in your code, and click OK. That’s it, you’re done. A number of popular languages are supported, including Java, LaTeX, R, SQL, Python, and more.
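For example, you might paste a short, purely illustrative Python snippet like this one into the dialog:

    # Illustrative snippet of the kind you might embed in a document:
    # compute summary statistics for a list of assay readings.
    readings = [0.42, 0.57, 0.61, 0.48, 0.55]
    mean = sum(readings) / len(readings)
    print(f"n = {len(readings)}, mean = {mean:.3f}")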
If you are a computer scientist who enjoys contributing to GitHub projects, please consider helping us with our API. Learn more on our GitHub page here.
As always, if you have feedback, questions or suggestions, please click the RSpace help button at the bottom right of the RSpace interface to start a live chat session with an RSpace product specialist!
Study says research data is lost at alarming rates
Availability of research data fell by 17% per year after initial publication.
Researchers from the University of British Columbia identified a striking decline in the accessibility of original scientific data over time. The team requested data from the authors of more than 500 papers published in the previous 2 to 22 years to determine what fraction of the data was still extant. Their disconcerting results led the team to the pointed conclusion that “research data cannot be reliably preserved by individual researchers” and prompted them to call for changes in the way such valuable data is archived.
“I don’t think anybody expects to easily obtain data from a 50-year-old paper, but to find that almost all the datasets are gone at 20 years was a bit of a surprise.” – Lead author, Tim Vines
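A back-of-the-envelope calculation shows why 20 years is enough to lose nearly everything. Treating the study’s 17% figure as a simple compounding annual decline (an approximation of its odds-based model):

    # If availability declines 17% per year, the fraction of datasets
    # still obtainable after n years is roughly 0.83 ** n.
    for years in (2, 10, 20):
        remaining = 0.83 ** years
        print(f"after {years:2d} years: ~{remaining:.0%} of datasets still available")

That works out to roughly 69% after 2 years, 16% after 10, and only about 2% after 20, consistent with the authors’ finding that almost all datasets were gone at the 20-year mark.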
Read the original article or the UBC news brief.
Outdated Storage to Blame
Not surprisingly, a primary reason data was lost was outdated storage devices. As the pace of technological change quickens, it is entirely plausible that data will be lost at even faster rates without a major shift in the ways data is captured and stored. The authors of this study call for storing more data in publicly accessible archives, but that suggestion ignores the very real need to protect intellectual property, especially in fields where technology development is a priority.
A well-designed ELN, however, solves both problems, especially if it is integrated into the institution’s long-term archiving infrastructure. A good-quality enterprise ELN solution can store data in industry-standard formats like XML and PDF that will be supported for decades to come. Additionally, the data in a properly implemented ELN is far more likely to be professionally backed up, so it will not be accidentally lost through the lax practices of individuals. By contrast, project data stored on scattered drives and in file cabinets around the lab can become fragmented as members transfer from one lab to another.
An ELN will ensure that a centralized copy remains accessible to authorized members within your institution. Controlled access to that data is entirely manageable and can be as public or as private as the authors and project managers want it to be. Moreover, authorship of each item is securely recorded, along with date- and time-stamps. Lab supervisors can easily sign off on the work of staff members, and documents can be digitally locked against any future editing. Intellectual property is ironclad where it’s necessary, yet scientific collaboration is enabled where it’s beneficial. Some ELNs can even pass appropriately formatted data bundles off to long-term archiving systems like DSpace, where they can safely reside under the supervision of the institution’s dedicated archivists.
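As a rough illustration of the record-locking idea (a generic sketch, not any particular vendor’s mechanism), a signed record can be reduced to a hash over its content plus authorship and timestamp metadata, so any later edit is detectable:

    # Generic sketch of tamper-evident record locking: hash the content
    # together with authorship and timestamp metadata at sign-off;
    # re-hashing later reveals any modification.
    import hashlib, json
    from datetime import datetime, timezone

    def lock_record(content, author):
        record = {
            "content": content,
            "author": author,
            "signed_at": datetime.now(timezone.utc).isoformat(),
        }
        serialized = json.dumps(record, sort_keys=True).encode()
        record["lock_hash"] = hashlib.sha256(serialized).hexdigest()
        return record

    def is_unmodified(record):
        body = {k: v for k, v in record.items() if k != "lock_hash"}
        serialized = json.dumps(body, sort_keys=True).encode()
        return hashlib.sha256(serialized).hexdigest() == record["lock_hash"]

    rec = lock_record("Gel run 12: bands at 1.2 kb and 3 kb.", "j.smith")
    rec["content"] += " (edited)"  # any later change...
    print(is_unmodified(rec))      # ...is detected: prints False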
Lab-Ally will be happy to assist you in securing your data for the future with an electronic lab notebook that meets all of your scientific and data management needs. Contact us for details.
Government Approved
Lab-Ally receives CAGE Code / SAM registration.
Lab-Ally is proud to announce that we are now fully registered with the US Government’s System for Award Management (SAM), including assignment of a CAGE Code through the Defense Logistics Information Service (DLIS). The CAGE Code is an identifier for contractors doing business with the Federal Government, NATO member nations, and other foreign governments.
With this development, we are better positioned to support research and development institutes and corporations working in concert with the US Departments of Energy, Agriculture, and Homeland Security, as well as the National Institutes of Health, the Environmental Protection Agency, and NOAA. In fact, several of our customers are already engaged in cutting-edge, government-funded research. Government organizations often need to use 21 CFR 11-compliant document management software like CERF ELN in order to meet strict requirements for data integrity and demonstrable records provenance.
The ELN Revolution
The winds of change are blowing in the electronic laboratory notebook world.
The first ELNs were developed in the 1990s in the private business sector, as a response to the conflicting needs of an increasingly digital, paper-free workplace and the legal necessity to securely document patentable innovations in a tamper-proof way. The primary adopters of these early ELNs were the “Big Pharma” companies, looking to protect their corporate intellectual property. Thus, the first-wave ELNs were Windows-based software packages installed on individual PC workstations. They were very complex tools, highly specific to a particular domain, that required lengthy training on the part of users and system administrators. Not surprisingly, these systems were extremely expensive, which placed them well outside the reach of other potential customers.

By the early 2000s, the ELN landscape shifted: academic researchers recognized the value of a quick, computer-based method to capture the day-to-day data generated in their laboratories that also provided a way to engage in scientific collaboration across the globe. Without the resources to afford the existing ELNs used by the pharmaceutical industry, individual academics began using and modifying widely available generic tools, such as Dropbox and Evernote. These were fairly flexible, easy to use, and just about as cheap as can be. Yet this cheapness came at the cost of security. A more secure method of capturing scientific records was needed, one that could also reliably track the versions of a document over the course of its history.

To fill this need, a second wave of ELNs arose, aimed at deployment within a single laboratory under the supervision of a Principal Investigator, who could determine the amount of sharing permissible within and among groups. Examples include eCat, LabArchives, and Ruro. These ELNs were designed to be more generic and less domain-specific, allowing cross-disciplinary collaborations and the potential for completely novel innovations. Taking a tip from the self-adapted internet tools, the second-generation ELNs began to be web-based, freeing academics from being bound to a specific computer platform, and they improved on usability as well. However, they were limited in scale, being best suited to single laboratories.
Enter the 2010s.
An ELN revolution appears to be brewing. Beginning in 2011 and 2012, a handful of large universities began seeking out affordable data management systems that could be deployed across an entire institution, allowing inter- and intra-group collaboration, version tracking, security, and ease of use. These newer solutions must be platform-independent, support data publishing, accommodate data in a variety of formats, and not only allow archiving but also retain sufficient metadata for searchable retrieval. ELNs will almost certainly need to be fully mobile to take advantage of tablets and other mobile devices. Above all, these systems must be scalable to thousands of users. Pilot trials of such enterprise ELN solutions, such as RSpace, took place in 2012 and 2013. The ELN revolution is certainly beginning, as more institutions recognize that digital data capture is rendering traditional recording methods increasingly inefficient. At present, a few major U.S. institutions are poised to enact the largest deployment yet seen in academia, with others sure to follow soon.