This session will not be available on Zoom and will not be recorded.  Please join us in the Oak West room in person.

Scientific research has become increasingly dependent on digital data analysis as the volume of acquired data continues to grow. With this explosion of data come new challenges in how to appropriately acquire, curate, analyze, and share data for research purposes.

This talk will share open-source DevOps tools drawn from IT best practices, spanning real-time data acquisition, trusted timestamping of data and code, containerized pipelines for data analysis, and a framework for sharing all outputs of research accessibly and permanently. Such standardized data organization methods are prerequisites for large-scale computational models. Importantly, these tools make it easy to comply with FAIR standards and upcoming OSTP data accessibility mandates and, in turn, improve the reproducibility, rigor, and transparency of scientific research.

Paul at UIT Unconference

Audience: All Track(s): AI, Community, Dev / DevOps, General, Research / Academia