@@ -156,13 +156,9 @@ dtype : Type name or dict of column -> type, default ``None``
     Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32, 'c': 'Int64'}``
     Use ``str`` or ``object`` together with suitable ``na_values`` settings to preserve
     and not interpret dtype. If converters are specified, they will be applied INSTEAD
-    of dtype conversion.
-
-    .. versionadded:: 1.5.0
-
-        Support for defaultdict was added. Specify a defaultdict as input where
-        the default determines the dtype of the columns which are not explicitly
-        listed.
+    of dtype conversion. Specify a defaultdict as input where
+    the default determines the dtype of the columns which are not explicitly
+    listed.

 dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
     Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
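
A rough, self-contained sketch of the defaultdict behaviour kept in the rewritten ``dtype`` text above; the tiny CSV and column names are invented for illustration:

.. code-block:: python

    from collections import defaultdict
    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3"
    # columns not listed explicitly ("b" and "c") fall back to the default, float64
    dtypes = defaultdict(lambda: "float64", a="int64")
    pd.read_csv(StringIO(data), dtype=dtypes).dtypes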
@@ -177,12 +173,8 @@ dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFram
 engine : {``'c'``, ``'python'``, ``'pyarrow'``}
     Parser engine to use. The C and pyarrow engines are faster, while the python engine
     is currently more feature-complete. Multithreading is currently only supported by
-    the pyarrow engine.
-
-    .. versionadded:: 1.4.0
-
-        The "pyarrow" engine was added as an *experimental* engine, and some features
-        are unsupported, or may not work correctly, with this engine.
+    the pyarrow engine. Some features of the "pyarrow" engine
+    are unsupported or may not work correctly.
 converters : dict, default ``None``
     Dict of functions for converting values in certain columns. Keys can either be
     integers or column labels.
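
A hedged sketch of the two options touched by this hunk; the sample data is made up, and the pyarrow call assumes the optional ``pyarrow`` package is installed:

.. code-block:: python

    from io import BytesIO, StringIO

    import pandas as pd

    data = "a,b\n1,2\n3,4"

    # converters are applied INSTEAD of dtype conversion for the listed columns
    pd.read_csv(StringIO(data), converters={"a": lambda x: int(x) * 10})

    # the pyarrow engine needs the optional pyarrow package; shown here on an
    # in-memory binary buffer, but a file path works the same way
    pd.read_csv(BytesIO(data.encode()), engine="pyarrow")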
@@ -355,8 +347,6 @@ on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
     - 'warn', print a warning when a bad line is encountered and skip that line.
     - 'skip', skip bad lines without raising or warning when they are encountered.

-    .. versionadded:: 1.3.0
-
 .. _io.dtypes:

 Specifying column data types
@@ -935,8 +925,6 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
 Writing CSVs to binary file objects
 +++++++++++++++++++++++++++++++++++

-.. versionadded:: 1.2.0
-
 ``df.to_csv(..., mode="wb")`` allows writing a CSV to a file object
 opened in binary mode. In most cases, it is not necessary to specify
 ``mode`` as pandas will auto-detect whether the file object is
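
A minimal sketch of writing a CSV to a binary buffer, as described in this hunk; the frame contents are arbitrary:

.. code-block:: python

    import io

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

    buf = io.BytesIO()
    # mode="wb" is optional here: pandas detects that buf only accepts bytes
    df.to_csv(buf, mode="wb", index=False)
    buf.getvalue()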
@@ -1122,8 +1110,6 @@ You can elect to skip bad lines:
     data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
     pd.read_csv(StringIO(data), on_bad_lines="skip")

-.. versionadded:: 1.4.0
-
 Or pass a callable function to handle the bad line if ``engine="python"``.
 The bad line will be a list of strings that was split by the ``sep``:

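
A rough sketch of the callable form mentioned at the end of this hunk, reusing the same sample data; the handler name is made up:

.. code-block:: python

    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

    def handle_bad_line(bad_line):
        # bad_line is the offending row split on ``sep``; keep the first three fields
        return bad_line[:3]

    pd.read_csv(StringIO(data), on_bad_lines=handle_bad_line, engine="python")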
@@ -1547,8 +1533,6 @@ functions - the following example shows reading a CSV file:

     df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

-.. versionadded:: 1.3.0
-
 A custom header can be sent alongside HTTP(s) requests by passing a dictionary
 of header key value mappings to the ``storage_options`` keyword argument as shown below:

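
A sketch of the custom-header mechanism described above; the header value is arbitrary:

.. code-block:: python

    import pandas as pd

    headers = {"User-Agent": "pandas-docs-example"}
    df = pd.read_csv(
        "https://download.bls.gov/pub/time.series/cu/cu.item",
        sep="\t",
        storage_options=headers,
    )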
@@ -1600,8 +1584,6 @@ More sample configurations and documentation can be found at `S3Fs documentation
 If you do *not* have S3 credentials, you can still access public
 data by specifying an anonymous connection, such as

-.. versionadded:: 1.2.0
-
 .. code-block:: python

     pd.read_csv(
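
A sketch of an anonymous S3 read, assuming the optional ``s3fs`` package is installed; the bucket and key below are placeholders, not a real dataset:

.. code-block:: python

    import pandas as pd

    # placeholder bucket/key -- substitute a real public object
    df = pd.read_csv(
        "s3://some-public-bucket/path/to/data.csv",
        storage_options={"anon": True},
    )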
@@ -2535,8 +2517,6 @@ Links can be extracted from cells along with the text using ``extract_links="all
     df[("GitHub", None)]
     df[("GitHub", None)].str[1]

-.. versionadded:: 1.5.0
-
 .. _io.html:

 Writing to HTML files
@@ -2726,8 +2706,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
 LaTeX
 -----

-.. versionadded:: 1.3.0
-
 Currently there are no methods to read from LaTeX, only output methods.

 Writing to LaTeX files
@@ -2766,8 +2744,6 @@ XML
 Reading XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 The top-level :func:`~pandas.io.xml.read_xml` function can accept an XML
 string/file/URL and will parse nodes and attributes into a pandas ``DataFrame``.

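
A self-contained sketch of the ``read_xml`` behaviour described above; the tiny document is invented, and ``parser="etree"`` avoids the optional lxml dependency:

.. code-block:: python

    from io import StringIO

    import pandas as pd

    xml = """<?xml version="1.0"?>
    <data>
      <row><shape>square</shape><sides>4</sides></row>
      <row><shape>circle</shape><sides/></row>
    </data>"""

    # each repeating <row> element becomes one row of the DataFrame
    pd.read_xml(StringIO(xml), parser="etree")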
@@ -3093,8 +3069,6 @@ supports parsing such sizeable files using `lxml's iterparse`_ and `etree's iter
 which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes
 without holding the entire tree in memory.

-.. versionadded:: 1.5.0
-
 .. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk
 .. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse

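
A small sketch of the ``iterparse`` option; since iterparse reads files on disk rather than in-memory buffers, the example writes a throwaway file first (the path and element names are illustrative):

.. code-block:: python

    import os
    import tempfile

    import pandas as pd

    xml = """<?xml version="1.0"?>
    <data>
      <row><shape>square</shape><sides>4</sides></row>
      <row><shape>triangle</shape><sides>3</sides></row>
    </data>"""

    # iterparse expects a local file, so materialize the sample document
    path = os.path.join(tempfile.mkdtemp(), "sample.xml")
    with open(path, "w") as f:
        f.write(xml)

    # pull only the named descendants of each repeating <row> element,
    # without building the whole tree in memory
    pd.read_xml(path, iterparse={"row": ["shape", "sides"]}, parser="etree")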
@@ -3133,8 +3107,6 @@ of reading in Wikipedia's very large (12 GB+) latest article data dump.
 Writing XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 ``DataFrame`` objects have an instance method ``to_xml`` which renders the
 contents of the ``DataFrame`` as an XML document.
