I did look at turbodbc, but there didn't seem to be much information on how well executemanycolumns works with a DataFrame.
-
Thank you! Preliminary results: to_csv + COPY: approx. 15 sec/million rows; binary + COPY: approx. 240 sec/million rows :-( The COPY itself is super fast - 0.3 seconds for 100k rows - but now almost 99% of the time is spent inside .store()
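For context, the to_csv + COPY path being timed above usually looks something like the sketch below: serialize the DataFrame to an in-memory CSV buffer, then stream it to Postgres with COPY FROM STDIN. This is a hedged illustration, not the poster's actual code - the `copy_dataframe` helper, the table name, and the use of psycopg2's `copy_expert` are my assumptions.

```python
import io

import pandas as pd


def copy_dataframe(df: pd.DataFrame, conn, table: str) -> None:
    """Bulk-load a DataFrame into Postgres via COPY FROM STDIN.

    Assumes `conn` is a psycopg2 connection and that the DataFrame's
    column order matches the target table. Serializing to one in-memory
    CSV buffer and handing it to COPY avoids per-row INSERT overhead,
    which is why this path clocks in around 15 sec/million rows.
    """
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    buf.seek(0)
    with conn.cursor() as cur:
        cur.copy_expert(f"COPY {table} FROM STDIN WITH (FORMAT csv)", buf)
    conn.commit()


# The serialization step alone (no database needed to try this part):
df = pd.DataFrame({"year": [2019, 2020], "export_value": [1.5, 2.5]})
buf = io.StringIO()
df.to_csv(buf, index=False, header=False)
print(buf.getvalue())
```

Note that COPY with FORMAT csv parses text back into the column types, which is cheap for Postgres; the slow binary path suggests the cost has moved into building the binary payload on the Python side rather than into the COPY itself.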
-
schema looks like this: Schema("test", [ id_("year"), cat("location_level"), num("export_value"), num("import_value"), num("export_rca"), num("cog"), num("distance"), id_("product_id"), cat("product_level"), id_("location_id") ])