Merged update from upstream sources

This is an automated DistroBaker update from upstream sources.
If you do not know what this is about or would like to opt out,
contact the OSCI team.

Source: https://src.fedoraproject.org/rpms/scipy.git#0a4506f8b7780273ea7ee7c4f72219024045aed7
Author: DistroBaker
Date: 2021-03-29 18:40:18 +00:00
parent c0a67f7304
commit 00f091d0cf
6 changed files with 32 additions and 129 deletions

.gitignore

@@ -33,3 +33,5 @@ scipy-0.7.2.tar.gz
 /scipy-1.5.3.tar.gz
 /scipy-1.5.4.tar.gz
 /scipy-1.6.0.tar.gz
+/scipy-1.6.1.tar.gz
+/scipy-1.6.2.tar.gz

scipy.spec

@@ -14,8 +14,8 @@
 Summary: Scientific Tools for Python
 Name: scipy
-Version: 1.6.0
-Release: 3%{?dist}
+Version: 1.6.2
+Release: 1%{?dist}
 # BSD -- whole package except:
 # Boost -- scipy/special/cephes/scipy_iv.c
@@ -24,9 +24,6 @@ License: BSD and Boost and Public Domain
Url: http://www.scipy.org/scipylib/index.html
Source0: https://github.com/scipy/scipy/releases/download/v%{version}/scipy-%{version}.tar.gz
# https://github.com/scipy/scipy/pull/13387
Patch0: wavfile.patch
BuildRequires: fftw-devel, suitesparse-devel
BuildRequires: %{blaslib}-devel
BuildRequires: gcc-gfortran, swig, gcc-c++
@@ -128,6 +125,25 @@ for PY in %{python3_version}; do
%endif
done
# FIXME: shared objects built from Fortran sources contain RPATH, find a way to prevent that
# scipy/integrate/_odepack
# scipy/integrate/_quadpack
# scipy/integrate/_test_odeint_banded
# scipy/integrate/lsoda
# scipy/integrate/vode
# scipy/linalg/_fblas
# scipy/linalg/_flapack
# scipy/linalg/_flinalg
# scipy/linalg/_interpolative
# scipy/linalg/cython_blas
# scipy/linalg/cython_lapack
# scipy/odr/__odrpack
# scipy/optimize/_lbfgsb
# scipy/sparse/linalg/eigen/arpack/_arpack
# scipy/sparse/linalg/isolve/_iterative
# scipy/special/_ufuncs
# scipy/special/cython_special
%install
%py3_install
# Some files got ambiguous python shebangs, we fix them after everything else is done
@@ -177,6 +193,14 @@ popd
%endif
%changelog
* Thu Mar 25 2021 Nikola Forró <nforro@redhat.com> - 1.6.2-1
- New upstream release 1.6.2
resolves: #1942896
* Thu Feb 18 2021 Nikola Forró <nforro@redhat.com> - 1.6.1-1
- New upstream release 1.6.1
resolves: #1929994
* Wed Feb 03 2021 Nikola Forró <nforro@redhat.com> - 1.6.0-3
- Increase test timeout on s390x


@@ -1,47 +0,0 @@
From ea0a77cf8761a8b8636b93314139ed0fc0a9d1db Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nikola=20Forr=C3=B3?= <nforro@redhat.com>
Date: Wed, 30 Sep 2020 11:44:25 +0200
Subject: [PATCH] TST: make a couple of tests expected to fail on 32-bit
architectures
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
In TestConstructUtils.test_concatenate_int32_overflow
and test_nnz_overflow on a 32-bit architecture, even when
check_free_memory() passes, a ValueError is raised on the attempt
to create a numpy array too large for the 32-bit address space.
Signed-off-by: Nikola Forró <nforro@redhat.com>
---
scipy/sparse/tests/test_construct.py | 1 +
scipy/sparse/tests/test_sparsetools.py | 1 +
2 files changed, 2 insertions(+)
diff --git a/scipy/sparse/tests/test_construct.py b/scipy/sparse/tests/test_construct.py
index 3a882c6cc..5a2b92667 100644
--- a/scipy/sparse/tests/test_construct.py
+++ b/scipy/sparse/tests/test_construct.py
@@ -378,6 +378,7 @@ class TestConstructUtils(object):
excinfo.match(r'Got blocks\[0,1\]\.shape\[0\] == 1, expected 2')
@pytest.mark.slow
+ @pytest.mark.xfail_on_32bit("Can't create large array for test")
def test_concatenate_int32_overflow(self):
""" test for indptr overflow when concatenating matrices """
check_free_memory(30000)
diff --git a/scipy/sparse/tests/test_sparsetools.py b/scipy/sparse/tests/test_sparsetools.py
index 0c208ef44..e95df1ba0 100644
--- a/scipy/sparse/tests/test_sparsetools.py
+++ b/scipy/sparse/tests/test_sparsetools.py
@@ -61,6 +61,7 @@ def test_regression_std_vector_dtypes():
@pytest.mark.slow
+@pytest.mark.xfail_on_32bit("Can't create large array for test")
def test_nnz_overflow():
# Regression test for gh-7230 / gh-7871, checking that coo_todense
# with nnz > int32max doesn't overflow.
--
2.26.2
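The failure mode this patch guards against can be shown without scipy: on a 32-bit interpreter, numpy's index type (`np.intp`) is only 4 bytes, so an array beyond the 32-bit address space fails to allocate even when a free-memory check passes. A minimal, generic sketch (the `is_32bit` helper is illustrative, not scipy's actual `xfail_on_32bit` marker):

```python
import sys

import numpy as np


def is_32bit():
    # On a 32-bit build, sys.maxsize is 2**31 - 1; allocations and
    # indices past that overflow np.intp before any free-memory
    # check can help.
    return sys.maxsize < 2**32


# np.intp mirrors the platform pointer width: 4 bytes on 32-bit
# builds, 8 bytes on 64-bit builds.
print(np.dtype(np.intp).itemsize, is_32bit())
```

scipy wraps this condition in a custom pytest marker so each affected test only carries the one-line annotation seen in the hunks above.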


@@ -1,40 +0,0 @@
From eabd8ea25fe291665f37fd069a1c574cd30d12cc Mon Sep 17 00:00:00 2001
From: Victor Stinner <vstinner@python.org>
Date: Wed, 25 Nov 2020 11:41:15 +0100
Subject: [PATCH] GH-13122: Skip factorial() float tests on Python 3.10
special.factorial() expects an array of integers.
On Python 3.10, math.factorial() rejects floats.
On Python 3.9, it only emits a DeprecationWarning.
A numpy array casts all integers to float if the array contains a
single NaN.
---
scipy/special/tests/test_basic.py | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/scipy/special/tests/test_basic.py b/scipy/special/tests/test_basic.py
index 9b7260e8435..e2ae29812a5 100644
--- a/scipy/special/tests/test_basic.py
+++ b/scipy/special/tests/test_basic.py
@@ -19,6 +19,7 @@
import itertools
import platform
+import sys
import numpy as np
from numpy import (array, isnan, r_, arange, finfo, pi, sin, cos, tan, exp,
@@ -1822,6 +1823,13 @@ def test_nan_inputs(self, x, exact):
result = special.factorial(x, exact=exact)
assert_(np.isnan(result))
+ # GH-13122: special.factorial() expects an array of integers.
+ # On Python 3.10, math.factorial() rejects floats.
+ # On Python 3.9, a DeprecationWarning is emitted.
+ # A numpy array casts all integers to float if the array contains a
+ # single NaN.
+ @pytest.mark.skipif(sys.version_info >= (3, 10),
+ reason="Python 3.10+ math.factorial() requires int")
def test_mixed_nan_inputs(self):
x = np.array([np.nan, 1, 2, 3, np.nan])
with suppress_warnings() as sup:
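The dtype pitfall described in the patch is easy to demonstrate in isolation; this standalone snippet (not part of the patch) shows both halves of the problem:

```python
import math
import sys

import numpy as np

# One NaN is enough to force every element of the array to float64,
# so even the integer entries 1, 2, 3 arrive at factorial() as floats:
x = np.array([np.nan, 1, 2, 3, np.nan])
print(x.dtype)  # float64

# math.factorial() only warns about floats on Python 3.9 and rejects
# them outright on 3.10+:
if sys.version_info >= (3, 10):
    try:
        math.factorial(5.0)
    except TypeError:
        print("float rejected")
```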

sources

@@ -1 +1 @@
-SHA512 (scipy-1.6.0.tar.gz) = 995ffaf56b713cdd4bdb98d8525b892e9ad84a511878b43213cb71a67f34d87c111da36cf1e0b044c75c0d5af64bfde4ad0f3e9c5e71cae2dbf053251f37064e
+SHA512 (scipy-1.6.2.tar.gz) = 18b03f32e8343c5a6c6148ac0bfd4b5f2cc9ff5f74d5d41761ae9e773d6af8774c7b09a3fcc47122864eccce1dbbc17e9325819885d3fc3ab2baf98e7d3befa4

wavfile.patch

@@ -1,36 +0,0 @@
commit 09d753f0ae71441906f5cee7a44b2d2b80212082
Author: Nikola Forró <nforro@redhat.com>
Date: Thu Jan 14 14:34:14 2021 +0100
ENH: Support big-endian platforms and big-endian WAVs
PR #12287 added support for reading arbitrary-bit-depth WAVs, but
the code doesn't consider big-endian WAVs and doesn't work as expected
on big-endian platforms due to the use of native-byte-order data types.
This change fixes that.
There is also a simple test case that compares equivalent RIFX
(big-endian) and RIFF (little-endian) files to verify that the data
read is the same.
diff --git a/scipy/io/wavfile.py b/scipy/io/wavfile.py
index 9b5845d6b..951f8d201 100644
--- a/scipy/io/wavfile.py
+++ b/scipy/io/wavfile.py
@@ -458,10 +458,13 @@ def _read_data_chunk(fid, format_tag, channels, bit_depth, is_big_endian,
if dtype == 'V1':
# Rearrange raw bytes into smallest compatible numpy dtype
- dt = numpy.int32 if bytes_per_sample == 3 else numpy.int64
- a = numpy.zeros((len(data) // bytes_per_sample, dt().itemsize),
+ dt = f'{fmt}i4' if bytes_per_sample == 3 else f'{fmt}i8'
+ a = numpy.zeros((len(data) // bytes_per_sample, numpy.dtype(dt).itemsize),
dtype='V1')
- a[:, -bytes_per_sample:] = data.reshape((-1, bytes_per_sample))
+ if is_big_endian:
+ a[:, :bytes_per_sample] = data.reshape((-1, bytes_per_sample))
+ else:
+ a[:, -bytes_per_sample:] = data.reshape((-1, bytes_per_sample))
data = a.view(dt).reshape(a.shape[:-1])
else:
if bytes_per_sample in {1, 2, 4, 8}:
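The byte rearrangement in the hunk above can be lifted into a standalone function to see what it does; this sketch mirrors the patched logic, though the function name and inputs are illustrative. Note that the 3-byte payload lands in the most significant bytes of the int32, so values come out scaled by 256 with the sign bit preserved:

```python
import numpy as np


def widen_24bit(data, is_big_endian):
    """Pack raw 3-byte samples into int32 of the file's byte order.

    data: 1-D numpy array of dtype 'V1' (raw bytes), length divisible by 3.
    """
    fmt = '>' if is_big_endian else '<'
    bytes_per_sample = 3
    dt = f'{fmt}i4'
    a = np.zeros((len(data) // bytes_per_sample, np.dtype(dt).itemsize),
                 dtype='V1')
    if is_big_endian:
        # RIFX: the payload bytes are the most significant ones, so they
        # go at the front of a big-endian int32
        a[:, :bytes_per_sample] = data.reshape((-1, bytes_per_sample))
    else:
        # RIFF: the most significant bytes sit at the end of a
        # little-endian int32
        a[:, -bytes_per_sample:] = data.reshape((-1, bytes_per_sample))
    return a.view(dt).reshape(a.shape[:-1])


# sample value 1 in RIFF (little-endian) and RIFX (big-endian) byte order
riff = np.frombuffer(bytes([0x01, 0x00, 0x00]), dtype='V1')
rifx = np.frombuffer(bytes([0x00, 0x00, 0x01]), dtype='V1')
print(int(widen_24bit(riff, False)[0]),
      int(widen_24bit(rifx, True)[0]))  # both 256 (1 << 8)
```

Using a byte-order-qualified dtype string instead of `numpy.int32`/`numpy.int64` is the core of the fix: a native dtype silently assumes the host's endianness, which is exactly what broke on big-endian platforms.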