How to return the maximum decimal “precision” and “scale” from pandas dataframe column?

I’m trying to create a function that reads a DataFrame’s float64 columns and returns the maximum precision and scale as if each were a SQL data type. For example, say I have a column “Earnings” with the values:

Earnings
100.01
100.011423
100.02
100.02231492
100.0313
100.044

In this example, the maximum precision would be 11, since the value with the most digits, 100.02231492, contains 11 digits in total. The maximum scale would be 8, since that same value has the most decimal places (8). Ideally, the function could be applied to a list of float64 columns and return the maximum precision and scale for each column.
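
To make the rule concrete, here is a minimal sketch of what I mean for a single value (the helper name precision_scale is just illustrative; it relies on Python’s string form of the float and ignores signs):

def precision_scale(value: float) -> tuple[int, int]:
    # Split the float's string form on the decimal point.
    integer_part, _, fractional_part = str(value).partition(".")
    scale = len(fractional_part)           # digits after the decimal point
    precision = len(integer_part) + scale  # total digit count
    return precision, scale

print(precision_scale(100.02231492))  # (11, 8)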

I have tried something akin to:

floats = staking_df.select_dtypes(include=[float])             # keep only the float64 columns
floats = floats.astype(str).apply(lambda x: x.str.split('.'))  # split each value on the decimal point

This returns something like:

Earnings
[100, 01]
[100, 011423]
[100, 02]
[100, 02231492]
[100, 0313]
[100, 044]

Ultimately, the function would return a tuple of (11, 8) for this column. I’m unsure how to proceed with applying this to multiple columns, and I can’t help but think the operation as written is inefficient. Is there a better way of approaching this?
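
For reference, the closest I can picture to a general version is a sketch like the following (the name max_precision_scale is made up, and the sample data mirrors the table above):

import pandas as pd

def max_precision_scale(df: pd.DataFrame) -> dict[str, tuple[int, int]]:
    """Return {column: (max precision, max scale)} for every float column."""
    result = {}
    for col in df.select_dtypes(include=[float]).columns:
        # Partition each value's string form around the decimal point.
        parts = df[col].dropna().astype(str).str.partition(".")
        int_digits = parts[0].str.len()  # digits before the point
        scale = parts[2].str.len()       # digits after the point
        result[col] = (int((int_digits + scale).max()), int(scale.max()))
    return result

staking_df = pd.DataFrame({"Earnings": [100.01, 100.011423, 100.02,
                                        100.02231492, 100.0313, 100.044]})
print(max_precision_scale(staking_df))  # {'Earnings': (11, 8)}

I don’t know whether this is idiomatic or fast on wide DataFrames, which is really what I’m asking.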