Workshop Computational Algebra 2020
Due to the ongoing pandemic, it was decided to cancel the two minisymposia “Algebraic Methods for the Sciences” and “Computeralgebra” at the DMV-Jahrestagung 2020. As a partial substitute, we organized a small workshop with a subset of the speakers originally invited to the minisymposia.
Location & registration
The workshop took place on 27 November 2020, starting at 13:00 CET, and was hosted virtually by TU Kaiserslautern, click here to join. Note that, for data protection reasons, joining requires an access code. To obtain it, please register for the workshop by sending an email to Ingrid Dietz with the subject “Workshop Computational Algebra 2020”, including your name and affiliation.
Program
- 13:00: Words of welcome
- 13:05-13:45: Christian Eder (Kaiserslautern)
- 14:00-14:40: Kathlen Kohn (Stockholm)
- 15:00-15:50: Anna-Laura Sattelberger (Leipzig)
- 16:00-16:50: Rainer Sinn (Leipzig)
Christian Eder (Kaiserslautern)
msolve – A new open source software library for algebraic system solving
In this talk we present work in progress on a new symbolic solver based on Faugère’s F4 algorithm, an efficient sparse FGLM algorithm, and an optimized univariate solver.
We present different, partly new methods and show the current status of the project. We also give insight into the fast, specialized linear algebra used in our implementation, and we compare our new software with state-of-the-art implementations in Maple and Magma.
The library is written in plain C and is open source; we plan to provide interfaces to various computer algebra systems in the near future, starting with a Julia interface to the newly developed OSCAR.
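The pipeline sketched above (Gröbner basis computation, change of term order, then univariate solving) can be illustrated on a tiny zero-dimensional system. The sketch below is my own illustration, not msolve itself: it uses SymPy’s Buchberger-based `groebner` as a stand-in for F4/FGLM, with a lex order producing the univariate eliminant directly.

```python
# Illustrative sketch of the solving pipeline (not msolve itself):
# Groebner basis -> univariate eliminant -> univariate root finding,
# using SymPy's Buchberger-based `groebner` in place of F4/FGLM.
from sympy import symbols, groebner, Poly, real_roots

x, y = symbols('x y')
system = [x**2 + y**2 - 4, x*y - 1]  # two conics, finitely many solutions

# A lex Groebner basis puts the system into "triangular" shape: its last
# element is a univariate polynomial in y (the eliminant).
G = groebner(system, x, y, order='lex')
eliminant = G.exprs[-1]

# Final step of the pipeline: solve the univariate polynomial.
roots_y = real_roots(Poly(eliminant, y))
print(eliminant)       # univariate polynomial in y
print(len(roots_y))    # 4 real y-coordinates
```

Here the eliminant is y^4 - 4y^2 + 1, and each of its four real roots extends to a solution of the system via the remaining basis element, which expresses x as a polynomial in y.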
This is a joint project with Mohab Safey El-Din, Jean-Charles Faugère and Jeremy Berthomieu from the PolSys Team in Paris, Wolfram Decker from TU Kaiserslautern, Franz-Josef Pfreundt from the Fraunhofer ITWM in Kaiserslautern, and Bernd Sturmfels from the Max Planck Institute in Leipzig.
[ Slides ]
Kathlen Kohn (Stockholm)
The geometry of linear (convolutional) neural networks
A fundamental goal in the theory of deep learning is to explain why the optimization of the loss function of a neural network does not seem to be affected by the presence of non-global local minima. Even in the case of linear networks, most of the existing literature paints a purely analytical picture of the loss, and provides no explanation as to why such architectures exhibit no bad local minima. We explain the intrinsic geometric reasons for this behavior of linear networks.
For neural networks in general, we discuss the neuromanifold, i.e., the space of functions parameterized by a network with a fixed architecture. For instance, the neuromanifold of a linear fully-connected network is a determinantal variety, a classical object of study in algebraic geometry. We compare this with linear convolutional networks, whose neuromanifolds are semi-algebraic sets with boundaries contained in discriminant loci.
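To make the determinantal picture concrete, here is a minimal numerical sketch (my own illustration, not from the talk): the end-to-end map of a linear fully-connected network is the product of its layer matrices, so a hidden layer of width k bounds the rank of every realizable matrix by k, placing the neuromanifold inside a determinantal variety.

```python
# Sketch: a linear network x -> W2 @ W1 @ x with hidden width k can only
# realize end-to-end matrices of rank <= k, i.e. points of a determinantal
# variety inside the space of m x n matrices.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 5, 2, 4                    # input dim, hidden width, output dim
W1 = rng.standard_normal((k, n))     # first layer
W2 = rng.standard_normal((m, k))     # second layer

end_to_end = W2 @ W1                 # the function the network computes
print(np.linalg.matrix_rank(end_to_end))  # 2, i.e. at most k
```

Generic layer matrices attain the bound, so the neuromanifold here is exactly the variety of 4 x 5 matrices of rank at most 2.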
This talk is based on joint work with Matthew Trager and Joan Bruna, as well as on ongoing work with Thomas Merkh, Guido Montúfar and Matthew Trager.
[ Slides ]
Anna-Laura Sattelberger (Leipzig)
Algebraic Analysis of the Hypergeometric Function 1F1 of a Matrix Argument
Hypergeometric functions are omnipresent in the sciences, and hypergeometric functions of a matrix argument are a natural generalization. Already in 1970, Muirhead pointed out a connection between these functions and probability distributions in statistics. Among other results, he provided a system of linear partial differential operators annihilating 1F1. The function 1F1 can therefore be studied in terms of D-modules, where D denotes the Weyl algebra.
In an article with Paul Görlach and Christian Lehn, we formulate a conjecture for the combinatorial structure of the characteristic variety of the Weyl closure of Muirhead’s D-ideal, supported by computational evidence as well as theoretical considerations. In particular, we determine the singular locus of Muirhead’s D-ideal. In this talk, I present the setting and the main results of our article.
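In the classical scalar case, the annihilating-operator point of view is easy to check by computer. The sketch below is my own illustration (not from the talk, and with hypothetical parameter choices a = 1, b = 2): it verifies symbolically that 1F1(a; b; z) is annihilated by Kummer's operator z*d^2/dz^2 + (b - z)*d/dz - a.

```python
# Scalar sanity check (illustration only; the talk concerns the matrix
# case): 1F1(a; b; z) satisfies Kummer's ODE
#   z*f'' + (b - z)*f' - a*f = 0,
# verified symbolically for the sample parameters a = 1, b = 2.
from sympy import symbols, hyper, hyperexpand, diff, simplify

z = symbols('z')
a, b = 1, 2
f = hyperexpand(hyper((a,), (b,), z))   # 1F1(1; 2; z) = (exp(z) - 1)/z
kummer = z*diff(f, z, 2) + (b - z)*diff(f, z) - a*f
print(simplify(kummer))                 # 0
```

Muirhead's operators play the analogous annihilating role for the matrix-argument 1F1, which is what makes the D-module perspective available.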
[ Article ]
Rainer Sinn (Leipzig)
Realization spaces of polytopes
We study a simple model of the realization space of a polytope that is particularly accessible from a computational point of view. We mostly explore how far the implicit function theorem can carry us. Combined with quite elementary combinatorial arguments, this approach recovers most of the known results about “nice” realization spaces. We will see that techniques for computing local information about real varieties are crucial here and need to be improved in order to answer basic questions in this area that are still open.
[ Slides ]