In this episode I speak about data transformation frameworks available to the data scientist who writes Python code.
The usual suspect is clearly Pandas, the most widely used library and the de facto standard. However, when data volumes increase and distributed algorithms come into play (following a map-reduce paradigm of computation), Pandas no longer performs as expected. In such contexts, other frameworks take its place.
In this episode I explain the frameworks that are the best equivalents to Pandas in big data contexts.
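As a small illustration of how little code can change when moving beyond plain Pandas, here is a minimal sketch of the drop-in approach Modin advertises. The file and column names are hypothetical, and Modin requires a Ray or Dask backend to be installed.

```python
# Plain Pandas: single-threaded, in-memory
import pandas as pd

df = pd.read_csv("transactions.csv")                     # hypothetical file
totals = df.groupby("customer_id")["amount"].sum()

# Modin: same pandas-like API, parallel execution on Ray or Dask.
# Swapping the import is the advertised "one line of code" change.
import modin.pandas as mpd

mdf = mpd.read_csv("transactions.csv")
mtotals = mdf.groupby("customer_id")["amount"].sum()
```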
Don’t forget to join our Discord channel to comment on previous episodes or propose new ones.
This episode is supported by Amethix Technologies
Amethix works to create and maximize the impact of the world’s leading corporations, startups, and nonprofits, so they can create a better future for everyone they serve. Amethix is a consulting firm focused on data science, machine learning, and artificial intelligence.
References
- Pandas – a fast, powerful, flexible, and easy-to-use open source data analysis and manipulation tool – https://pandas.pydata.org/
- Modin – scale your pandas workflows by changing one line of code – https://github.com/modin-project/modin
- Dask – advanced parallelism for analytics – https://dask.org/
- Ray – a fast and simple framework for building and running distributed applications – https://github.com/ray-project/ray
- RAPIDS – GPU data science – https://rapids.ai/
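As a further illustration of the tools referenced above, here is a minimal Dask DataFrame sketch showing lazy, partitioned computation over data that may not fit in memory. The file pattern and column names are hypothetical.

```python
import dask.dataframe as dd

# read_csv with a glob pattern builds a lazy, partitioned dataframe;
# each file becomes one or more partitions processed independently
df = dd.read_csv("events-*.csv")

# Familiar pandas-like operations only build a task graph...
daily_totals = df.groupby("date")["value"].sum()

# ...which is executed when compute() is called, in parallel across
# threads, processes, or a cluster
result = daily_totals.compute()
print(result.head())
```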