libsecp256k1-2
library for EC operations on curve secp256k1
An optimized C library for EC operations on curve secp256k1.
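The curve secp256k1 is y² = x³ + 7 over a prime field. As a hedged illustration of the kind of EC operation the library optimizes (this is not libsecp256k1's C API, just a pure-Python sketch using the standard domain parameters):

```python
# Illustrative sketch of EC arithmetic on secp256k1 (y^2 = x^3 + 7 mod p).
# libsecp256k1 implements this in highly optimized, constant-time C;
# this pure-Python version only demonstrates the underlying mathematics.

# Standard secp256k1 domain parameters
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Add two curve points (None represents the point at infinity)."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # inverse points sum to infinity
    if a == b:
        m = (3 * x1 * x1) * pow(2 * y1, -1, P)  # tangent slope (doubling)
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P)     # chord slope (addition)
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Derive a (toy) public key from a private scalar
priv = 12345
pub = ec_mul(priv, G)
assert (pub[1] ** 2 - pub[0] ** 3 - 7) % P == 0  # point lies on the curve
```

Real applications should use libsecp256k1 itself: this naive version is neither constant-time nor side-channel safe.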
lib++dfb-1.7-7t64
direct frame buffer graphics (++DFB shared library)
DirectFB is a graphics library which was designed with embedded systems
in mind. It offers maximum hardware-accelerated performance at a minimum
of resource usage and overhead.
libdirectfb-1.7-7t64
direct frame buffer graphics (shared libraries)
DirectFB is a graphics library which was designed with embedded systems
in mind. It offers maximum hardware-accelerated performance at a minimum
of resource usage and overhead.
python3-text-unidecode
most basic Python port of the Text::Unidecode Perl library (Python3 version)
This library is an alternative to the other Python ports of Text::Unidecode
(unidecode and isounidecode): unidecode (available in Debian as
python3-unidecode) is licensed under the GPL, while isounidecode uses too
much memory and did not support Python 3 at the time text-unidecode was
created.
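The package transliterates Unicode text to plain ASCII. As a rough, hedged approximation using only the standard library (not the package's own transliteration tables), NFKD decomposition plus dropping combining marks covers the accented-Latin case:

```python
# Rough stand-in for text-unidecode's transliteration, stdlib only.
# The real package ships its own tables and handles many more scripts;
# NFKD decomposition only covers characters that decompose to ASCII.
import unicodedata

def ascii_approx(text: str) -> str:
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(ascii_approx("Café déjà vu"))  # -> Cafe deja vu
```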
r-cran-riskregression
GNU R Risk Regression Models and Prediction Scores for Survival
Analysis with Competing Risks
Implementation of the following methods
for event history analysis. Risk regression models for survival
endpoints, also in the presence of competing risks, are fitted using
binomial regression based on a time sequence of binary event status
variables. The package provides a formula interface for the Fine-Gray
regression model, an interface for combining cause-specific Cox
regression models, and a toolbox for assessing and comparing the
performance of risk predictions
(risk markers and risk prediction models). Prediction performance is
measured by the Brier score and the area under the ROC curve for
binary, possibly time-dependent outcomes. Inverse probability of censoring
weighting and pseudo values are used to deal with right censored data.
Lists of risk markers and lists of risk models are assessed
simultaneously. Cross-validation repeatedly splits the data, trains the
risk prediction models on one part of each split and then summarizes and
compares the performance across splits.
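The package itself is for R, but the two headline metrics above have simple definitions in the uncensored case. A hedged sketch (in Python, on made-up data, not the package's own API) of the Brier score and the AUC:

```python
# Sketch of the performance metrics described above, on made-up data.
# r-cran-riskregression computes these in R with IPCW adjustments for
# right-censored data; this only illustrates the uncensored definitions.

def brier_score(predicted_risks, outcomes):
    """Mean squared difference between predicted risk and binary outcome."""
    return sum((p - y) ** 2 for p, y in zip(predicted_risks, outcomes)) / len(outcomes)

def auc(predicted_risks, outcomes):
    """Area under the ROC curve: probability a case outranks a control,
    counting ties as half a win."""
    cases = [p for p, y in zip(predicted_risks, outcomes) if y == 1]
    controls = [p for p, y in zip(predicted_risks, outcomes) if y == 0]
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

# A perfect risk marker separates cases from controls completely
assert auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]) == 1.0
assert brier_score([1.0, 1.0, 0.0, 0.0], [1, 1, 0, 0]) == 0.0
```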