This repository was archived by the owner on Feb 2, 2024. It is now read-only.

Commit 2559bb8

densmirn, akharche, 1e-to, Rubtsowa, PokhodenkoSA
authored
Merge refactored DataFrame to master (#870)
* Turn on Azure CI for branch (#822)
* Redesign DataFrame structure (#817)
* Merge master (#840)
* Df.at impl (#738)
* Series.add / Series.lt with fill_value (#655)
* Impl Series.skew() (#813)
* Run tests in separate processes (#833)
* Run tests in separate processes
* Take tests list from sdc/tests/__init__.py
* change README (#818)
* change README
* change README for doc
* add refs
* change ref
* change ref
* change ref
* change readme
* Improve boxing (#832)
* Specify sdc version from channel for examples testing (#837)
* Specify sdc version from channel for examples testing. It occurs that conda resolver can take Intel SDC package not from first channel where it is found. Specify particular SDC version to avoid this in examples for now. Also print info for environment creation and package installing
* Fix incerrectly used f-string
* Fix log_info call
* Numba 0.49.0 all (#824)
* Fix run tests. Remove import of _getitem_array1d
* expectedFailure
* expectedFailure-2
* expectedFailure-3
* Conda recipe numba==0.49
* expectedFailure-4
* Refactor imports from Numba
* Unskip tests
* Fix using of numpy_support.from_dtype()
* Unskip tests
* Fix DataFrame tests with rewrite IR without Del statements
* Unskip tests
* Fix corr_overload with type inference error for none < 1
* Fix hpat_pandas_series_cov with type inference error for none < 2
* Unskip tests
* Unskip tests
* Fixed iternext_series_array with using _getitem_array1d. _getitem_array1d is replaced with _getitem_array_single_int in Numba 0.49.
* Unskip tests
* Unskip old test
* Fix Series.at
* Unskip tests
* Add decrefs in boxing (#836)
* Adding extension type for pd.RangeIndex (#820)
* Adding extension type for pd.RangeIndex. This commit adds Numba extension types for pandas.RangeIndex class, allowing creation of pd.RangeIndex objects and passing and returning them to/from nopython functions.
* Applying review comments
* Fix for PR 831 (#839)
* Update pyarrow version to 0.17.0. Update recipe, code and docs.
* Disable intel channel
* Disable intel channel for testing
* Fix remarks

Co-authored-by: Vyacheslav Smirnov <vyacheslav.s.smirnov@intel.com>

* Update to Numba 0.49.1 (#838)
* Update to Numba 0.49.1
* Fix requirements.txt
* Add travis

Co-authored-by: Elena Totmenina <totmeninal@mail.ru>
Co-authored-by: Rubtsowa <36762665+Rubtsowa@users.noreply.github.com>
Co-authored-by: Sergey Pokhodenko <sergey.pokhodenko@intel.com>
Co-authored-by: Vyacheslav-Smirnov <51660067+Vyacheslav-Smirnov@users.noreply.github.com>
Co-authored-by: Alexey Kozlov <52973316+kozlov-alexey@users.noreply.github.com>
Co-authored-by: Vyacheslav Smirnov <vyacheslav.s.smirnov@intel.com>

* Re-implement df.getitem based on new structure (#845)
* Re-implement df.getitem based on new structure
* Re-implemented remaining getitem overloads, add tests
* Re-implement df.values based on new structure (#846)
* Re-implement df.pct_change based on new structure (#847)
* Re-implement df.drop based on new structure (#848)
* Re-implement df.append based on new structure (#857)
* Re-implement df.reset_index based on new structure (#849)
* Re-implement df._set_column based on new strcture (#850)
* Re-implement df.rolling methods based on new structure (#852)
* Re-implement df.index based on new structure (#853)
* Re-implement df.copy based on new structure (#854)
* Re-implement df.isna based on new structure (#856)
* Re-implement df.at/iat/loc/iloc based on new structure (#858)
* Re-implement df.head based on new structure (#855)
* Re-implement df.head based on new structure
* Simplify codegen docstring
* Re-implement df.groupby methods based on new structure (#859)
* Re-implement dataframe boxing based on new structure (#861)
* Re-implement DataFrame unboxing (#860)
* Boxing draft. Merge branch 'master' of https://github.com/IntelPython/sdc into merge_master # Conflicts: # sdc/hiframes/pd_dataframe_ext.py # sdc/tests/test_dataframe.py
* Implement unboxing in new structure
* Improve variable names + add error handling
* Return error status
* Move getting list size to if_ok block
* Unskipped unexpected success tests
* Unskipped unexpected success tests in GroupBy
* Remove decorators
* Change to incref False
* Skip tests failed due to unimplemented df structure
* Bug in rolling
* Fix rolling (#865)
* Undecorate tests on reading CSV (#866)
* Re-implement df structure: enable rolling tests that pass (#867)
* Re-implement df structure: refactor len (#868)
* Re-implement df structure: refactor len
* Undecorated all the remaining methods

Co-authored-by: Denis <denis.smirnov@intel.com>

* Merge master to feature/dataframe_model_refactoring (#869)
* Enable CI on master

Co-authored-by: Angelina Kharchevnikova <angelina.kharchevnikova@intel.com>
Co-authored-by: Elena Totmenina <totmeninal@mail.ru>
Co-authored-by: Rubtsowa <36762665+Rubtsowa@users.noreply.github.com>
Co-authored-by: Sergey Pokhodenko <sergey.pokhodenko@intel.com>
Co-authored-by: Vyacheslav-Smirnov <51660067+Vyacheslav-Smirnov@users.noreply.github.com>
Co-authored-by: Alexey Kozlov <52973316+kozlov-alexey@users.noreply.github.com>
Co-authored-by: Vyacheslav Smirnov <vyacheslav.s.smirnov@intel.com>
1 parent 81f6e78 commit 2559bb8

File tree

9 files changed (+1309 / -415 lines)

sdc/datatypes/hpat_pandas_dataframe_functions.py

Lines changed: 290 additions & 301 deletions
Large diffs are not rendered by default.

sdc/datatypes/hpat_pandas_dataframe_rolling_functions.py

Lines changed: 19 additions & 8 deletions
@@ -123,19 +123,25 @@ def df_rolling_method_other_df_codegen(method_name, self, other, args=None, kws=
         f' raise ValueError("Method rolling.{method_name}(). The object pairwise\\n expected: False, None")'
     ]
 
-    data_length = 'len(self._data._data[0])' if data_columns else '0'
-    other_length = 'len(other._data[0])' if other_columns else '0'
+    data_length = 'len(self._data._data[0][0])' if data_columns else '0'
+    other_length = 'len(other._data[0][0])' if other_columns else '0'
     func_lines += [f' length = max([{data_length}, {other_length}])']
 
     for col in all_columns:
         res_data = f'result_data_{col}'
         if col in common_columns:
+            col_loc = self.data.column_loc[col]
+            type_id, col_id = col_loc.type_id, col_loc.col_id
+            other_col_loc = other.column_loc[col]
+            other_type_id = other_col_loc.type_id
+            other_col_id = other_col_loc.col_id
+
             other_series = f'other_series_{col}'
             method_kws['other'] = other_series
             method_params = ', '.join(args + kwsparams2list(method_kws))
             func_lines += [
-                f' data_{col} = self._data._data[{data_columns[col]}]',
-                f' other_data_{col} = other._data[{other_columns[col]}]',
+                f' data_{col} = self._data._data[{type_id}][{col_id}]',
+                f' other_data_{col} = other._data[{other_type_id}][{other_col_id}]',
                 f' series_{col} = pandas.Series(data_{col})',
                 f' {other_series} = pandas.Series(other_data_{col})',
                 f' rolling_{col} = series_{col}.rolling({rolling_params})',
@@ -158,16 +164,18 @@ def df_rolling_method_other_df_codegen(method_name, self, other, args=None, kws=
     return func_text, global_vars
 
 
-def df_rolling_method_main_codegen(method_params, df_columns, method_name):
+def df_rolling_method_main_codegen(method_params, df_columns, column_loc, method_name):
     rolling_params = df_rolling_params_codegen()
     method_params_as_str = ', '.join(method_params)
 
     results = []
     func_lines = []
     for idx, col in enumerate(df_columns):
+        col_loc = column_loc[col]
+        type_id, col_id = col_loc.type_id, col_loc.col_id
         res_data = f'result_data_{col}'
         func_lines += [
-            f' data_{col} = self._data._data[{idx}]',
+            f' data_{col} = self._data._data[{type_id}][{col_id}]',
             f' series_{col} = pandas.Series(data_{col})',
             f' rolling_{col} = series_{col}.rolling({rolling_params})',
             f' result_{col} = rolling_{col}.{method_name}({method_params_as_str})',
@@ -204,7 +212,9 @@ def df_rolling_method_other_none_codegen(method_name, self, args=None, kws=None)
         f' raise ValueError("Method rolling.{_method_name}(). The object pairwise\\n expected: False")'
     ]
     method_params = args + ['{}={}'.format(k, k) for k in kwargs if k != 'other']
-    func_lines += df_rolling_method_main_codegen(method_params, self.data.columns, method_name)
+    func_lines += df_rolling_method_main_codegen(method_params, self.data.columns, self.data.column_loc,
+                                                 method_name)
+
     func_text = '\n'.join(func_lines)
 
     global_vars = {'pandas': pandas}
@@ -229,7 +239,8 @@ def df_rolling_method_codegen(method_name, self, args=None, kws=None):
     func_lines = [f'def {impl_name}({impl_params_as_str}):']
 
     method_params = args + ['{}={}'.format(k, k) for k in kwargs]
-    func_lines += df_rolling_method_main_codegen(method_params, self.data.columns, method_name)
+    func_lines += df_rolling_method_main_codegen(method_params, self.data.columns,
+                                                 self.data.column_loc, method_name)
     func_text = '\n'.join(func_lines)
 
     global_vars = {'pandas': pandas}
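The recurring change in these hunks is the indexing scheme: the old layout stored one array per column (`self._data._data[idx]`), while the refactored DataFrame groups the arrays by dtype and locates each column through a `column_loc` record carrying a `type_id` and a `col_id`. A minimal pure-Python sketch of that mapping, with illustrative names (SDC builds the real one in `get_structure_maps`):

```python
from collections import namedtuple

# Hypothetical analogue of the column-location record the generated code
# reads type_id/col_id from; not SDC's exact type.
ColumnLoc = namedtuple('ColumnLoc', ['type_id', 'col_id'])

def build_column_loc(col_names, col_dtypes):
    """Group columns by dtype: type_id picks the per-dtype list,
    col_id is the position of the column's array inside that list."""
    types_order = []   # distinct dtypes in first-seen order
    next_col_id = {}   # per-dtype counter for assigning col_id
    column_loc = {}
    for name, dtype in zip(col_names, col_dtypes):
        if dtype not in types_order:
            types_order.append(dtype)
        type_id = types_order.index(dtype)
        column_loc[name] = ColumnLoc(type_id, next_col_id.get(type_id, 0))
        next_col_id[type_id] = next_col_id.get(type_id, 0) + 1
    return column_loc, types_order
```

With columns A (int64), B (float64), C (int64), the data would be laid out as `([A_arr, C_arr], [B_arr])`, so C is addressed as `data[0][1]` rather than `data[2]`.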

sdc/datatypes/hpat_pandas_groupby_functions.py

Lines changed: 26 additions & 19 deletions
@@ -38,16 +38,13 @@
 from numba.core import cgutils
 from numba.extending import intrinsic
 from numba.core.registry import cpu_target
-from numba.typed import List, Dict
 from numba.core.typing import signature
 from numba import literally
 
 from sdc.datatypes.common_functions import sdc_arrays_argsort, _sdc_asarray, _sdc_take
 from sdc.datatypes.hpat_pandas_groupby_types import DataFrameGroupByType, SeriesGroupByType
 from sdc.utilities.sdc_typing_utils import TypeChecker, kwsparams2list, sigparams2list
-from sdc.utilities.utils import (sdc_overload, sdc_overload_method, sdc_register_jitable,
-                                 sdc_register_jitable)
-from sdc.hiframes.pd_dataframe_ext import get_dataframe_data
+from sdc.utilities.utils import (sdc_overload, sdc_overload_method, sdc_register_jitable)
 from sdc.hiframes.pd_series_type import SeriesType
 from sdc.str_ext import string_type
 
@@ -155,7 +152,14 @@ def sdc_pandas_dataframe_getitem(self, idx):
             and all(isinstance(a, types.StringLiteral) for a in idx))):
 
         by_col_id_literal = self.col_id.literal_value
-        target_col_id_literal = self.parent.columns.index(idx.literal_value) if idx_is_literal_str else None
+        by_col_loc = self.parent.column_loc[self.parent.columns[by_col_id_literal]]
+        by_type_id, by_col_id = by_col_loc.type_id, by_col_loc.col_id
+
+        if idx_is_literal_str:
+            target_col_id_literal = self.parent.columns.index(idx.literal_value)
+            target_col_loc = self.parent.column_loc[self.parent.columns[target_col_id_literal]]
+            target_type_id, target_col_id = target_col_loc.type_id, target_col_loc.col_id
+
         def sdc_pandas_dataframe_getitem_common_impl(self, idx):
 
             # calling getitem twice raises IndexError, just as in pandas
@@ -165,10 +169,10 @@ def sdc_pandas_dataframe_getitem_common_impl(self, idx):
             if idx_is_literal_str == True:  # noqa
                 # no need to pass index into this series, as we group by array
                 target_series = pandas.Series(
-                    data=self._parent._data[target_col_id_literal],
+                    data=self._parent._data[target_type_id][target_col_id],
                     name=self._parent._columns[target_col_id_literal]
                 )
-                by_arr_data = self._parent._data[by_col_id_literal]
+                by_arr_data = self._parent._data[by_type_id][by_col_id]
                 return init_series_groupby(target_series, by_arr_data, self._data, self._sort)
             else:
                 return init_dataframe_groupby(self._parent, by_col_id_literal, self._data, self._sort, idx)
@@ -184,8 +188,8 @@ def sdc_pandas_dataframe_getitem_idx_unicode_str_impl(self, idx):
     return None
 
 
-def _sdc_pandas_groupby_generic_func_codegen(func_name, columns, func_params, defaults, impl_params):
-
+def _sdc_pandas_groupby_generic_func_codegen(func_name, columns, column_loc,
+                                             func_params, defaults, impl_params):
     all_params_as_str = ', '.join(sigparams2list(func_params, defaults))
     extra_impl_params = ', '.join(kwsparams2list(impl_params))
 
@@ -204,15 +208,18 @@ def _sdc_pandas_groupby_generic_func_codegen(func_name, columns, func_params, de
     ]
 
     # TODO: remove conversion from Numba typed.List to reflected one while creating group_arr_{i}
-    func_lines.extend(['\n'.join([
-        f' result_data_{i} = numpy.empty(res_index_len, dtype=res_arrays_dtypes[{i}])',
-        f' column_data_{i} = {df}._data[{column_ids[i]}]',
-        f' for j in numpy.arange(res_index_len):',
-        f' idx = argsorted_index[j] if {groupby_param_sort} else j',
-        f' group_arr_{i} = _sdc_take(column_data_{i}, list({groupby_dict}[group_keys[idx]]))',
-        f' group_series_{i} = pandas.Series(group_arr_{i})',
-        f' result_data_{i}[j] = group_series_{i}.{func_name}({extra_impl_params})',
-    ]) for i in range(len(columns))])
+    for i in range(len(columns)):
+        col_loc = column_loc[column_names[i]]
+        type_id, col_id = col_loc.type_id, col_loc.col_id
+        func_lines += [
+            f' result_data_{i} = numpy.empty(res_index_len, dtype=res_arrays_dtypes[{i}])',
+            f' column_data_{i} = {df}._data[{type_id}][{col_id}]',
+            f' for j in numpy.arange(res_index_len):',
+            f' idx = argsorted_index[j] if {groupby_param_sort} else j',
+            f' group_arr_{i} = _sdc_take(column_data_{i}, list({groupby_dict}[group_keys[idx]]))',
+            f' group_series_{i} = pandas.Series(group_arr_{i})',
+            f' result_data_{i}[j] = group_series_{i}.{func_name}({extra_impl_params})',
+        ]
 
     data = ', '.join(f'\'{column_names[i]}\': result_data_{i}' for i in range(len(columns)))
     func_lines.extend(['\n'.join([
@@ -314,7 +321,7 @@ def sdc_pandas_dataframe_groupby_apply_func(self, func_name, func_args, defaults
 
     groupby_func_name = f'_dataframe_groupby_{func_name}_impl'
     func_text, global_vars = _sdc_pandas_groupby_generic_func_codegen(
-        func_name, subject_columns, func_args, defaults, impl_args)
+        func_name, subject_columns, self.parent.column_loc, func_args, defaults, impl_args)
 
     # capture result column types into generated func context
     global_vars['res_arrays_dtypes'] = res_arrays_dtypes
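Both the rolling and groupby changes run through the same text-based codegen pattern: the overload assembles the implementation's source as a list of f-string lines (`func_lines`), joins them into `func_text`, and executes that text with a dict of globals to obtain a compilable function. A simplified, self-contained sketch of the pattern; the `sum` kernel and all names here are illustrative, not SDC's actual generated code:

```python
def make_per_column_impl(columns, method_name='sum'):
    """Emit one block of source lines per column, then exec the joined
    text, mirroring how df_rolling_method_main_codegen builds func_lines."""
    func_lines = ['def _impl(data):']
    for i, col in enumerate(columns):
        # one generated statement per column, indexed by position
        func_lines.append(f'    result_{col} = {method_name}(data[{i}])')
    results = ', '.join(f'result_{col}' for col in columns)
    func_lines.append(f'    return ({results},)')
    func_text = '\n'.join(func_lines)

    # global_vars plays the role of the generated function's module globals
    global_vars = {'sum': sum, 'max': max, 'min': min}
    exec(func_text, global_vars)
    return global_vars['_impl']
```

Because the column set is part of the DataFrame's type, the generated source is specialized per column layout and compiled once per distinct type.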

sdc/hiframes/boxing.py

Lines changed: 74 additions & 36 deletions
@@ -48,6 +48,8 @@
 from sdc.hiframes.pd_series_ext import SeriesType
 from sdc.hiframes.pd_series_type import _get_series_array_type
 
+from sdc.hiframes.pd_dataframe_ext import get_structure_maps
+
 from .. import hstr_ext
 import llvmlite.binding as ll
 from llvmlite import ir as lir
@@ -58,12 +60,14 @@
 
 @typeof_impl.register(pd.DataFrame)
 def typeof_pd_dataframe(val, c):
+
     col_names = tuple(val.columns.tolist())
     # TODO: support other types like string and timestamp
     col_types = get_hiframes_dtypes(val)
     index_type = _infer_index_type(val.index)
+    column_loc, _, _ = get_structure_maps(col_types, col_names)
 
-    return DataFrameType(col_types, index_type, col_names, True)
+    return DataFrameType(col_types, index_type, col_names, True, column_loc=column_loc)
 
 
 # register series types for import
@@ -86,21 +90,55 @@ def unbox_dataframe(typ, val, c):
     # create dataframe struct and store values
     dataframe = cgutils.create_struct_proxy(typ)(c.context, c.builder)
 
-    column_tup = c.context.make_tuple(
-        c.builder, types.UniTuple(string_type, n_cols), column_strs)
+    errorptr = cgutils.alloca_once_value(c.builder, cgutils.false_bit)
 
-    # this unboxes all DF columns so that no column unboxing occurs later
-    for col_ind in range(n_cols):
-        series_obj = c.pyapi.object_getattr_string(val, typ.columns[col_ind])
-        arr_obj = c.pyapi.object_getattr_string(series_obj, "values")
-        ty_series = typ.data[col_ind]
-        if isinstance(ty_series, types.Array):
-            native_val = unbox_array(typ.data[col_ind], arr_obj, c)
-        elif ty_series == string_array_type:
-            native_val = unbox_str_series(string_array_type, series_obj, c)
+    col_list_type = types.List(string_type)
+    ok, inst = listobj.ListInstance.allocate_ex(c.context, c.builder, col_list_type, n_cols)
 
-        dataframe.data = c.builder.insert_value(
-            dataframe.data, native_val.value, col_ind)
+    with c.builder.if_else(ok, likely=True) as (if_ok, if_not_ok):
+        with if_ok:
+            inst.size = c.context.get_constant(types.intp, n_cols)
+            for i, column_str in enumerate(column_strs):
+                inst.setitem(c.context.get_constant(types.intp, i), column_str, incref=False)
+            dataframe.columns = inst.value
+
+        with if_not_ok:
+            c.builder.store(cgutils.true_bit, errorptr)
+
+    # If an error occurred, drop the whole native list
+    with c.builder.if_then(c.builder.load(errorptr)):
+        c.context.nrt.decref(c.builder, col_list_type, inst.value)
+
+    _, data_typs_map, types_order = get_structure_maps(typ.data, typ.columns)
+
+    for col_typ in types_order:
+        type_id, col_indices = data_typs_map[col_typ]
+        n_type_cols = len(col_indices)
+        list_type = types.List(col_typ)
+        ok, inst = listobj.ListInstance.allocate_ex(c.context, c.builder, list_type, n_type_cols)
+
+        with c.builder.if_else(ok, likely=True) as (if_ok, if_not_ok):
+            with if_ok:
+                inst.size = c.context.get_constant(types.intp, n_type_cols)
+                for i, col_idx in enumerate(col_indices):
+                    series_obj = c.pyapi.object_getattr_string(val, typ.columns[col_idx])
+                    arr_obj = c.pyapi.object_getattr_string(series_obj, "values")
+                    ty_series = typ.data[col_idx]
+                    if isinstance(ty_series, types.Array):
+                        native_val = unbox_array(typ.data[col_idx], arr_obj, c)
+                    elif ty_series == string_array_type:
+                        native_val = unbox_str_series(string_array_type, series_obj, c)
+
+                    inst.setitem(c.context.get_constant(types.intp, i), native_val.value, incref=False)
+
+                dataframe.data = c.builder.insert_value(dataframe.data, inst.value, type_id)
+
+            with if_not_ok:
+                c.builder.store(cgutils.true_bit, errorptr)
+
+        # If an error occurred, drop the whole native list
+        with c.builder.if_then(c.builder.load(errorptr)):
+            c.context.nrt.decref(c.builder, list_type, inst.value)
 
     # TODO: support unboxing index
     if typ.index == types.none:
@@ -113,7 +151,6 @@ def unbox_dataframe(typ, val, c):
         index_data = c.pyapi.object_getattr_string(index_obj, "_data")
         dataframe.index = unbox_array(typ.index, index_data, c).value
 
-    dataframe.columns = column_tup
     dataframe.parent = val
 
     # increase refcount of stored values
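The rewritten `unbox_dataframe` above iterates `types_order` and, for each distinct column type, pulls `(type_id, col_indices)` out of `data_typs_map`, so all same-typed columns land in one native list. A hedged pure-Python sketch of what those two return values of `get_structure_maps` could look like (simplified: the real function in `sdc/hiframes/pd_dataframe_ext.py` takes the column names too and also returns `column_loc`):

```python
def structure_maps_sketch(col_types):
    """Illustrative sketch: for each distinct column type, a type_id and
    the original column indices sharing that type -- the shape the
    unboxing loop consumes."""
    types_order = []    # distinct types, first-seen order
    data_typs_map = {}  # type -> (type_id, [original column indices])
    for idx, col_type in enumerate(col_types):
        if col_type not in data_typs_map:
            data_typs_map[col_type] = (len(types_order), [])
            types_order.append(col_type)
        data_typs_map[col_type][1].append(idx)
    return data_typs_map, types_order
```

For column types `int64, float64, int64`, the int64 bucket gets `type_id` 0 with indices `[0, 2]`, so the unboxing loop fills one two-element native list for the integer columns and a one-element list for the float column.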
@@ -122,7 +159,7 @@ def unbox_dataframe(typ, val, c):
     for var in column_strs:
         c.context.nrt.incref(c.builder, string_type, var)
 
-    return NativeValue(dataframe._getvalue())
+    return NativeValue(dataframe._getvalue(), is_error=c.builder.load(errorptr))
 
 
 def get_hiframes_dtypes(df):
@@ -202,15 +239,10 @@ def box_dataframe(typ, val, c):
     context = c.context
     builder = c.builder
 
-    n_cols = len(typ.columns)
     col_names = typ.columns
     arr_typs = typ.data
-    dtypes = [a.dtype for a in arr_typs]  # TODO: check Categorical
 
     dataframe = cgutils.create_struct_proxy(typ)(context, builder, value=val)
-    col_arrs = [builder.extract_value(dataframe.data, i) for i in range(n_cols)]
-    # df unboxed from Python
-    has_parent = cgutils.is_not_null(builder, dataframe.parent)
 
     pyapi = c.pyapi
     # gil_state = pyapi.gil_ensure()  # acquire GIL
@@ -219,28 +251,31 @@ def box_dataframe(typ, val, c):
     class_obj = pyapi.import_module_noblock(mod_name)
     df_dict = pyapi.dict_new()
 
-    for i, cname, arr, arr_typ, dtype in zip(range(n_cols), col_names, col_arrs, arr_typs, dtypes):
+    arrays_list_objs = {}
+    for cname, arr_typ in zip(col_names, arr_typs):
         # df['cname'] = boxed_arr
         # TODO: datetime.date, DatetimeIndex?
         name_str = context.insert_const_string(c.builder.module, cname)
         cname_obj = pyapi.string_from_string(name_str)
 
-        if dtype == string_type:
-            arr_obj = box_str_arr(arr_typ, arr, c)
-        elif isinstance(arr_typ, Categorical):
-            arr_obj = box_Categorical(arr_typ, arr, c)
-            # context.nrt.incref(builder, arr_typ, arr)
-        elif dtype == types.List(string_type):
-            arr_obj = box_list(list_string_array_type, arr, c)
-            # context.nrt.incref(builder, arr_typ, arr)  # TODO required?
-            # pyapi.print_object(arr_obj)
-        else:
-            arr_obj = box_array(arr_typ, arr, c)
-            # TODO: is incref required?
-            # context.nrt.incref(builder, arr_typ, arr)
+        col_loc = typ.column_loc[cname]
+        type_id, col_id = col_loc.type_id, col_loc.col_id
+
+        # dataframe.data looks like a tuple(list(array))
+        # e.g. ([array(int64, 1d, C), array(int64, 1d, C)], [array(float64, 1d, C)])
+        arrays_list_obj = arrays_list_objs.get(type_id)
+        if arrays_list_obj is None:
+            list_typ = types.List(arr_typ)
+            # extracting list from the tuple
+            list_val = builder.extract_value(dataframe.data, type_id)
+            # getting array from the list to box it then
+            arrays_list_obj = box_list(list_typ, list_val, c)
+            arrays_list_objs[type_id] = arrays_list_obj
+
+        # PyList_GetItem returns borrowed reference
+        arr_obj = pyapi.list_getitem(arrays_list_obj, col_id)
         pyapi.dict_setitem(df_dict, cname_obj, arr_obj)
 
-        pyapi.decref(arr_obj)
         pyapi.decref(cname_obj)
 
     df_obj = pyapi.call_method(class_obj, "DataFrame", (df_dict,))
@@ -252,6 +287,9 @@ def box_dataframe(typ, val, c):
     pyapi.object_setattr_string(df_obj, 'index', arr_obj)
     pyapi.decref(arr_obj)
 
+    for arrays_list_obj in arrays_list_objs.values():
+        pyapi.decref(arrays_list_obj)
+
     pyapi.decref(class_obj)
     # pyapi.gil_release(gil_state)  # release GIL
     return df_obj
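Boxing is the inverse walk over the same layout: for every column name, `column_loc` yields `(type_id, col_id)`, the per-type list is boxed once per `type_id` and cached in `arrays_list_objs`, and each column's array is then fetched from it (as a borrowed reference) to populate the dict handed to the `DataFrame` constructor. A pure-Python analogue of that column loop, with plain lists standing in for boxed arrays:

```python
def box_columns(data, column_loc, col_names):
    """Rebuild the name -> array dict from the tuple-of-lists layout,
    fetching each column via its (type_id, col_id) location."""
    df_dict = {}
    for cname in col_names:
        type_id, col_id = column_loc[cname]
        df_dict[cname] = data[type_id][col_id]
    return df_dict

# Layout for columns A (int), B (float), C (int): the two int columns
# share type_id 0, so C sits at position 1 of the first list.
native_data = ([[1, 2], [3, 4]], [[0.5, 1.5]])
loc = {'A': (0, 0), 'B': (1, 0), 'C': (0, 1)}
```

In the real `box_dataframe` the cached per-type list objects own references, which is why they are decref'd in one sweep after the dict is built, while the arrays themselves are borrowed from `PyList_GetItem` and need no separate decref.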
