- June 22, 2022
-
-
Gwenael SAMAIN authored
This implements early pruning and screening for the active set, in a similar fashion to what is done for the homotopy. Still a WIP because: 1. we don't have smart dual generation (no rescaled dual or anything like that), which means our dual bound is probably quite bad; 2. there is no complete time measurement of both accelerations.
-
Gwenael SAMAIN authored
Make `fobj_featuresign` handle the l0 term on S1, and make both `fobj_featuresign` and `calculr` take const references instead of passing by value.
-
Gwenael SAMAIN authored
This is not used in the homotopy, but will be used in the active set. Basically, this lets us check at a finer grain what has changed, so that the intermediate quantities (r, Br, Hr) are updated only when necessary.
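A minimal sketch of this kind of fine-grained change tracking; the flag names and the update rules below are illustrative assumptions, not the actual implementation.
```
#include <vector>

// Hypothetical change-tracking flags: record what moved since the last pass,
// then refresh only the intermediate quantities that depend on it.
struct ChangeFlags {
    bool support_changed = false;   // an index entered or left the active set
    bool x_changed       = false;   // some coefficient values moved
};

struct Intermediates {
    std::vector<double> r, Br, Hr;  // residual and derived products

    void refresh(const ChangeFlags& flags) {
        if (flags.x_changed || flags.support_changed) {
            // recompute r = y - A x (omitted)
        }
        if (flags.support_changed) {
            // recompute Br and Hr, which depend on the current support (omitted)
        }
    }
};
```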
-
- May 30, 2022
-
-
Gwenael SAMAIN authored
This commit achieves feature parity between the homotopy and icd. More precisely, before this commit icd already had dual pruning and screening, but the iteration moduli were hardcoded instead of exposed as command line arguments. Also, some optimizations were made just like in the homotopy: using the target_UB when it is better than the current primal iterate, and nesting the screening computation inside the dual pruning block so that we don't try to compute the screening when the dual wasn't updated. Note that the dual rescale done in the homotopy couldn't be done in icd, at least not the same way, as we don't have an "intermediate mu" like in the homotopy.
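A minimal sketch of the nesting described above; the loop structure and parameter names (dual_modulo, screening_modulo) mirror the options mentioned in these commits but are otherwise assumptions.
```
// The screening test lives inside the dual block, so it only runs on
// iterations where the dual point (and hence the gap) was actually refreshed.
void icd_loop(int max_iter, int dual_modulo, int screening_modulo) {
    int dual_updates = 0;
    for (int it = 0; it < max_iter; ++it) {
        // ... one coordinate-descent pass over the variables (omitted) ...

        if (it % dual_modulo == 0) {
            // recompute the dual point and the lower bound; try dual pruning
            ++dual_updates;
            if (dual_updates % screening_modulo == 0) {
                // Gap-Safe screening, reusing the dual point computed just above
            }
        }
    }
}
```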
-
Gwenael SAMAIN authored
As there is already a handful of commits carrying both CMake and plain Makefiles, and as we've reached performance parity, there is no need to keep & maintain the plain Makefiles anymore, so this commit removes them from the tree.
-
Gwenael SAMAIN authored
Detecting linear algebra libraries seems to be a bit clunky. As a brainless way to do it, I took the corresponding CMake scripts from Armadillo, and "tada, it works!". This would need a bit of streamlining to live in a better world.
-
- May 03, 2022
-
-
Gwenael SAMAIN authored
When we evaluate the lower bound, several algorithms can be used, among them (iterative) coordinate descent, referred to as ICD in the code. The lower bound given at the end of the ICD iterations was too high by mu * Card(S1): that term was integrated into the dual objective function (used by every algorithm to compute the lower bound), but the change was never propagated back to the ICD-specific code, which was still adding mu * Card(S1) on its own. This commit fixes this double addition.
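Schematically, writing D(w) for the dual objective (which already includes the mu * Card(S1) term), the bug amounted to:
```
% LaTeX sketch of the double counting described above; D(w) denotes the dual
% objective, which already contains the mu * Card(S1) term.
\[
  \mathrm{LB}_{\text{reported}}
    \;=\; D(w) + \mu\,\mathrm{Card}(S_1)
    \;=\; \mathrm{LB}_{\text{correct}} + \mu\,\mathrm{Card}(S_1),
\]
% i.e. the reported bound overestimated the correct one by exactly mu * Card(S1).
```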
-
Gwenael SAMAIN authored
My previous change to 10^-8 was apparently defective: it crashes badly when run on instances. Lacking the time & motivation to inspect this further right now, I revert to the previous epsilon of 10^-3 given by Ramzi.
-
Gwenael SAMAIN authored
The most commonly used theoretical formulation of P2+0 is (with mu as the penalty parameter): 1/2 ||y - Ax||^2 + mu ||x||_0. The 1/2 in front of the least-squares term is handy when manipulating the gradient in the optimization algorithms. However, for historical reasons, Mimosa was solving problems formulated as: ||y - Ax||^2 + mu ||x||_0. This is now fixed to the 1/2 version.
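As an illustration of why the 1/2 factor is convenient (standard calculus, not anything specific to Mimosa):
```
% With the 1/2 factor, the gradient of the least-squares term carries no extra constant:
\[
  f(x) = \tfrac{1}{2}\,\lVert y - Ax \rVert^2
  \quad\Longrightarrow\quad
  \nabla f(x) = A^{\top}(Ax - y),
\]
% whereas without it the gradient picks up a factor of 2:
\[
  g(x) = \lVert y - Ax \rVert^2
  \quad\Longrightarrow\quad
  \nabla g(x) = 2\,A^{\top}(Ax - y).
\]
```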
-
Gwenael SAMAIN authored
The P2+0 problem (also denoted L2pL0 in the code) is the penalized version of our sparse optimization problems: the objective function is composed of a least-squares term and an l0 pseudo-norm, this l0 term being weighted by a parameter. Previously, this parameter was named lambda, but that conflicts with a reserved name in some application problems, such as (sparse) spectral analysis, where lambda denotes a frequency in a spectrum. This renames the parameter _in the instance files_ to mu. WARNING: the code still "speaks" in terms of lambda.
-
Gwenael SAMAIN authored
The Gap-Safe screening rule has a radius which depends on the duality gap. It works with an arbitrary (feasible) primal point x, as well as an arbitrary dual point w. Moreover, the theoretical performance of the screening is better if we choose w such that w = Ax - y, because the screening radius decreases from sqrt(2*Gap(x, w)) to sqrt(Gap(x, w)) (i.e. we save a sqrt(2) factor). However, in the case of the homotopy, the rescaled w = breakpoint_lambda / target_mu * (Ax - y) is able to produce a dual objective which is way better than w = Ax - y. So the duality gap is much lower, at the cost of a sqrt(2) increase in the screening radius. Empirically, we come out ahead when using this dual rescale. This fixes the screening radius by adding the sqrt(2) factor, so that we can use any dual point, including the homotopy-rescaled version.
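Schematically, keeping only the relation stated above (any implementation-specific constants are omitted):
```
% Gap-Safe radius for an arbitrary feasible dual point w versus the specific
% choice w = Ax - y:
\[
  r_{\text{generic}}(x, w) = \sqrt{2\,\mathrm{Gap}(x, w)},
  \qquad
  r_{\,w = Ax - y}(x) = \sqrt{\mathrm{Gap}(x,\, Ax - y)}.
\]
% The generic radius is larger by a factor sqrt(2), but it admits rescaled dual
% points whose much smaller gap more than compensates for that factor.
```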
-
- Apr. 12, 2022
-
-
Gwenael SAMAIN authored
The choice of the exploration strategy has an impact on the performance of the method, particularly when the problem is hard to solve and the bounds are not that good. To quickly prove the optimality of the solution, empirical evidence suggests preferring heap_on_lb (that is, best-first search) over stack (that is, depth-first search). This commit changes the default strategy used in the handy scripts accordingly.
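A minimal sketch contrasting the two strategies; the Node type and the container names are illustrative assumptions, not the actual Mimosa data structures.
```
#include <queue>
#include <stack>
#include <vector>

struct Node { double lower_bound; /* ... branching state ... */ };

// stack (depth-first search): always dive into the most recently created node.
std::stack<Node> dfs_frontier;

// heap_on_lb (best-first search): always expand the open node with the smallest
// lower bound, which tends to close the optimality gap faster on hard instances.
struct ByLowerBound {
    bool operator()(const Node& a, const Node& b) const {
        return a.lower_bound > b.lower_bound;   // min-heap on the lower bound
    }
};
std::priority_queue<Node, std::vector<Node>, ByLowerBound> bfs_frontier;
```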
-
- Apr. 07, 2022
-
-
Gwenael SAMAIN authored
When we screen out variables, we remove them from the SBar support explored during optimization. During the optimization, we may compute a dual objective value, and this dual depends on SBar. The edge case of an empty SBar wasn't taken into account before, leading to an exception when taking a max over an empty vector. This is now fixed: no SBar = don't take that maximum.
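A minimal sketch of the guard, assuming the maximum is taken with something like std::max_element; the surrounding dual computation is not shown.
```
#include <algorithm>
#include <vector>

// Only take the maximum over the SBar-indexed terms when SBar is non-empty:
// on an empty range, max_element returns the end iterator, and using that
// value (or calling a library max on an empty vector) blows up.
double max_over_sbar(const std::vector<double>& terms_on_sbar) {
    if (terms_on_sbar.empty()) {
        return 0.0;  // assumed neutral contribution of an empty SBar
    }
    return *std::max_element(terms_on_sbar.begin(), terms_on_sbar.end());
}
```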
-
Gwenael SAMAIN authored
When creating the Context, a default value is assigned to TimeBBMax, which controls the maximum number of seconds granted to the solver. This commit raises this limit from 1000 sec to 3600 (= 1 hour).
-
Gwenael SAMAIN authored
When creating a Context, sane default values are assigned to a number of options, including warm_restart. To be conservative with respect to the old behaviour, the warm restart is set to false by default.
-
- Apr. 01, 2022
-
-
Gwenael SAMAIN authored
CMake being what it is, CMAKE_CXX_FLAGS and CMAKE_CXX_FLAGS_RELEASE do not perform the same job: the _RELEASE flags only apply in Release mode, which CMake doesn't select by default. Anyway, the O3 flag is no longer set in CMAKE_CXX_FLAGS_RELEASE, but in CMAKE_CXX_FLAGS. Works like a charm.
-
- March 16, 2022
-
-
Gwenael SAMAIN authored
Mimosa_l2pl0_listinst, Mimosa_l2pl0_pipeinst and Mimosa_l2pl0_1inst are scripts originally written for MimosaUnmix, the code version targeting sparse problems with positivity and sum-to-1 constraints. This fixes the scripts, which were calling MimosaUnmix instead of Mimosa.
-
- March 04, 2022
-
-
Gwenael SAMAIN authored
Before, we used the old Hmatrix.dat, but H is more of a signal processing notation, and not as general as A.
-
Gwenael SAMAIN authored
This introduces three scripts dedicated to l2pl0 that ease the invocation of Mimosa by predefining all the solver options; the only thing left to provide is the place(s) to find the instance(s). CMakeLists has been updated so that these scripts are shipped in the CPack packages.
-
Gwenael SAMAIN authored
`l2pl0_with_SBR` and `l2pl0_without_SBR` have been merged into a single `l2pl0`, to avoid surprising newcomers. The sbr boolean switch is still in the code in case it is needed later, though.
-
Gwenael SAMAIN authored
The heap_on_ls exploration strategy had already been added back in the comments, but not in the end-user usage message.
-
- Feb. 17, 2022
-
-
Gwenael SAMAIN authored
Before this commit, cerr contained both the usual error output _and_ some useful execution metrics formatted as CSV. Keeping that CSV out of standard output makes sense because of the interactive "I'm still alive" logging currently written to standard output, but dumping the CSV onto standard error was just a semantic mess born of previous (extreme) laziness on my end. This commit adds an optional argument: the filename the CSV data must be written to. If this argument is absent, the old behaviour of writing the CSV to standard error is kept, for backward compatibility.
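A minimal sketch of the optional-destination behaviour described above; the function and variable names are assumptions.
```
#include <fstream>
#include <iostream>
#include <string>

// Write the CSV metrics to the given file when a filename was supplied,
// otherwise fall back to standard error for backward compatibility.
void write_csv_metrics(const std::string& csv_filename, const std::string& csv_data) {
    if (csv_filename.empty()) {
        std::cerr << csv_data;        // old behaviour: CSV mixed into stderr
        return;
    }
    std::ofstream out(csv_filename);  // new behaviour: dedicated CSV file
    out << csv_data;
}
```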
-
- Jan. 25, 2022
-
-
Gwenael SAMAIN authored
This commit is a small change in the way we get the list of instances to run. Before this commit, we were supposed to give an SNR, a K and an instance range. This assumed that a given instance has an SNR and a K; more precisely, the instance format had to follow SA_SNRxx_Kyy_instancezz, where xx, yy and zz are integers. This format is absolutely arbitrary, and comes from the first dataset used to test the code. After this commit, the SNR, K and instance arguments are removed. Instead, we get a list of instances, one instance per line, from stdin.
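A minimal sketch of the new input handling; the function name is an assumption, but the behaviour is the one described above (one instance per line on stdin, no naming convention required).
```
#include <iostream>
#include <string>
#include <vector>

// Read the instance list from standard input, one instance per line.
std::vector<std::string> read_instance_list() {
    std::vector<std::string> instances;
    std::string line;
    while (std::getline(std::cin, line)) {
        if (!line.empty()) {
            instances.push_back(line);
        }
    }
    return instances;
}
```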
-
Gwenael SAMAIN authored
Each command line argument has a small dedicated parsing section. As there are quite a lot of arguments, a comment at the beginning of each section recalls which argument is being parsed. This was not yet the case for the dual modulo and screening modulo arguments.
-
- Jan. 18, 2022
-
-
Gwenael SAMAIN authored
Heap_on_ls was previously absent from the usage message; this is fixed. Also, Limited Discrepancy Search can be a good heuristic to guide the branch-and-bound, possibly better than depth-first search. An LDS gives an equivalent score to nodes which have the same number of right branches ("wrong" branches in the branching strategy sense) in their ancestry path. In our setting, this corresponds to the size of S0.
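A minimal sketch of the discrepancy score described above; the path encoding reuses the T[rl]* node naming convention introduced in a commit further down, and is an illustrative assumption rather than the actual implementation.
```
#include <string>

// Discrepancy of a node = number of right ("wrong") branches on its ancestry
// path; nodes with the same discrepancy get the same exploration priority.
// With the T[rl]* naming, counting the 'r' characters gives that number,
// which in our setting equals Card(S0).
int lds_score(const std::string& node_name) {
    int discrepancies = 0;
    for (char branch : node_name) {
        if (branch == 'r') {
            ++discrepancies;
        }
    }
    return discrepancies;
}
```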
-
- Jan. 11, 2022
-
-
Gwenael SAMAIN authored
To leverage dual pruning, the current implementation uses a struct called relax_info as the return type of the l1 algorithm functions. This relax_info contains, among other things, whether the node must be pruned or not. The structure was created simply with:
```
relax_info ret;
```
and then, if we are in the relevant case, ret.prune is set to true. This assumes the above declaration initializes ret.prune to false, which is not the case for every compiler/OS/platform combination. This commit fixes this by forcing the initialization to false.
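A minimal sketch of two ways to force that initialization; the fields of relax_info beyond the prune flag are assumptions.
```
// Option 1: default member initializers, so every relax_info starts unpruned.
struct relax_info {
    bool   prune = false;        // explicit default instead of an indeterminate value
    double lower_bound = 0.0;    // hypothetical extra field, for illustration
};

// Option 2: value-initialize at the creation site; with empty braces the
// members get their defaults (or zero) instead of garbage.
relax_info make_default_relax_info() {
    relax_info ret{};            // ret.prune is guaranteed to be false
    return ret;
}
```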
-
- Jan. 07, 2022
-
-
Gwenael SAMAIN authored
Before this patch, the active set numerical zero (i.e. the threshold under which we disregard a value and treat it as 0) was way too high. This patch makes that threshold a bit more sensible.
-
Gwenael SAMAIN authored
This patch allows logging detailed traces of per-node metrics. For now this only covers dual pruning and screening in the lower-bounding stage, but it can be further extended as needed.
-
Gwenael SAMAIN authored
Before, nodes didn't have any names, which prevented them from being uniquely identified. Now, a name has the format T[rl]*: "T" denotes the root node, "r" denotes the right child of a parent node, and "l" the left child. So, "Trl" is the left child of the right child of the root:

          T
         / \
       Tl   Tr
           /  \
         Trl   Trr
-
Gwenael SAMAIN authored
Inside the homotopy, how often should we compute the dual, and when we do compute it, should we also compute the Gap-Safe screening? Previously, the answers to these questions were hardcoded. Now, they are (mandatory) command line arguments.
-