@@ -158,12 +158,6 @@ dtype : Type name or dict of column -> type, default ``None``
158158 and not interpret dtype. If converters are specified, they will be applied INSTEAD
159159 of dtype conversion.
160160
161- .. versionadded:: 1.5.0
162-
163- Support for defaultdict was added. Specify a defaultdict as input where
164- the default determines the dtype of the columns which are not explicitly
165- listed.
166-
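As an illustration of the ``defaultdict`` behaviour described above (the
column names and data are made up for this sketch), the default factory
supplies the dtype for every column that is not listed explicitly:

.. code-block:: python

   from collections import defaultdict
   from io import StringIO

   import pandas as pd

   data = "a,b,c\n1,2,3\n4,5,6"
   # "a" is read as int64; every other column falls back to float64
   dtypes = defaultdict(lambda: "float64", a="int64")
   df = pd.read_csv(StringIO(data), dtype=dtypes)
   print(df.dtypes)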
167161dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
168162 Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
169163 arrays, nullable dtypes are used for all dtypes that have a nullable
@@ -177,12 +171,8 @@ dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFram
177171engine : {``'c'``, ``'python'``, ``'pyarrow'``}
178172 Parser engine to use. The C and pyarrow engines are faster, while the python engine
179173 is currently more feature-complete. Multithreading is currently only supported by
180- the pyarrow engine.
181-
182- .. versionadded:: 1.4.0
183-
184- The "pyarrow" engine was added as an *experimental* engine, and some features
185- are unsupported, or may not work correctly, with this engine.
174+ the pyarrow engine. The "pyarrow" engine was added as an *experimental* engine,
175+ and some features are unsupported, or may not work correctly, with this engine.
186176converters : dict, default ``None``
187177 Dict of functions for converting values in certain columns. Keys can either be
188178 integers or column labels.
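As a short sketch (the column name and data are invented for the example),
a converter keyed by column label keeps the raw strings of one column
instead of letting the parser infer a dtype:

.. code-block:: python

   from io import StringIO

   import pandas as pd

   data = "col_1\n1\n2\n'A'\n4.22"
   # the converter is applied to every value in "col_1"
   df = pd.read_csv(StringIO(data), converters={"col_1": str})
   print(df["col_1"].dtype)  # object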
@@ -357,8 +347,6 @@ on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
357347 - 'warn', print a warning when a bad line is encountered and skip that line.
358348 - 'skip', skip bad lines without raising or warning when they are encountered.
359349
360- .. versionadded:: 1.3.0
361-
362350.. _io.dtypes:
363351
364352Specifying column data types
@@ -937,8 +925,6 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
937925 Writing CSVs to binary file objects
938926+++++++++++++++++++++++++++++++++++
939927
940- .. versionadded:: 1.2.0
941-
942928``df.to_csv(..., mode="wb")`` allows writing a CSV to a file object
943929opened in binary mode. In most cases, it is not necessary to specify
944930``mode`` as pandas will auto-detect whether the file object is
@@ -1124,8 +1110,6 @@ You can elect to skip bad lines:
11241110 data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
11251111 pd.read_csv(StringIO(data), on_bad_lines="skip")
11261112
1127- .. versionadded:: 1.4.0
1128-
11291113 Or pass a callable function to handle the bad line if ``engine="python"``.
11301114The bad line will be a list of strings that was split by the ``sep``:
11311115
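One possible handler, sketched here for illustration (the truncation
strategy is only an example), keeps the first three fields of an
over-long row:

.. code-block:: python

   from io import StringIO

   import pandas as pd

   data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

   def handle_bad_line(bad_line):
       # bad_line is the offending row split on ``sep``; return a list
       # to keep the (repaired) row, or None to drop it
       return bad_line[:3]

   df = pd.read_csv(StringIO(data), on_bad_lines=handle_bad_line, engine="python")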
@@ -1553,8 +1537,6 @@ functions - the following example shows reading a CSV file:
15531537
15541538 df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")
15551539
1556- .. versionadded:: 1.3.0
1557-
15581540 A custom header can be sent alongside HTTP(s) requests by passing a dictionary
15591541of header key value mappings to the ``storage_options`` keyword argument as shown below:
15601542
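A minimal sketch of that idea (the header value is illustrative only):

.. code-block:: python

   import pandas as pd

   headers = {"User-Agent": "pandas"}
   df = pd.read_csv(
       "https://download.bls.gov/pub/time.series/cu/cu.item",
       sep="\t",
       storage_options=headers,
   )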
@@ -1606,8 +1588,6 @@ More sample configurations and documentation can be found at `S3Fs documentation
16061588If you do *not* have S3 credentials, you can still access public
16071589data by specifying an anonymous connection, such as
16081590
1609- .. versionadded:: 1.2.0
1610-
16111591.. code-block:: python
16121592
16131593 pd.read_csv(
@@ -2541,8 +2521,6 @@ Links can be extracted from cells along with the text using ``extract_links="all
25412521 df[("GitHub", None)]
25422522 df[("GitHub", None)].str[1]
25432523
2544- .. versionadded:: 1.5.0
2545-
25462524 .. _io.html:
25472525
25482526Writing to HTML files
@@ -2732,8 +2710,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
27322710LaTeX
27332711-----
27342712
2735- .. versionadded:: 1.3.0
2736-
27372713Currently there are no methods to read from LaTeX, only output methods.
27382714
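A minimal sketch of one such output method, assuming ``Styler.to_latex``
is available in the installed pandas version:

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
   # render the frame as a LaTeX tabular environment
   print(df.style.to_latex())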
27392715Writing to LaTeX files
@@ -2772,8 +2748,6 @@ XML
27722748Reading XML
27732749'''''''''''
27742750
2775- .. versionadded:: 1.3.0
2776-
27772751The top-level :func:`~pandas.io.xml.read_xml` function can accept an XML
27782752string/file/URL and will parse nodes and attributes into a pandas ``DataFrame``.
27792753
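For example, a small self-contained sketch (the XML document is made up
for illustration):

.. code-block:: python

   from io import StringIO

   import pandas as pd

   xml = """<?xml version="1.0"?>
   <data>
     <row><shape>square</shape><sides>4</sides></row>
     <row><shape>circle</shape><sides/></row>
   </data>"""

   df = pd.read_xml(StringIO(xml))
   print(df)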
@@ -3099,8 +3073,6 @@ supports parsing such sizeable files using `lxml's iterparse`_ and `etree's iter
30993073which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes
31003074without holding the entire tree in memory.
31013075
3102- .. versionadded:: 1.5.0
3103-
31043076.. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk
31053077.. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse
31063078
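A sketch of such a call, with a made-up file path and element names,
might look like:

.. code-block:: python

   import pandas as pd

   # stream only <page> elements and the listed child elements/attributes,
   # rather than reading the whole tree into memory
   df = pd.read_xml(
       "/path/to/very_large.xml",
       iterparse={"page": ["title", "ns", "id"]},
   )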
@@ -3139,8 +3111,6 @@ of reading in Wikipedia's very large (12 GB+) latest article data dump.
31393111Writing XML
31403112'''''''''''
31413113
3142- .. versionadded:: 1.3.0
3143-
31443114``DataFrame`` objects have an instance method ``to_xml`` which renders the
31453115contents of the ``DataFrame`` as an XML document.
31463116
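A minimal sketch (the data are invented for the example); with no path
argument the document is returned as a string:

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({"shape": ["square", "circle"], "sides": [4, 3]})
   # without a path, to_xml returns the XML document as a string
   print(df.to_xml())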