3SAT is the canonical NP-complete problem. In 3SAT, we have a set of boolean variables v1, v2, v3, ... and a set of clauses like (v1 or not v3 or v4). Each clause contains exactly three literals, and the goal is to assign a boolean value to every variable so that all the clauses are satisfied. Since 3SAT is NP-complete, we don't know any algorithm that can always solve it efficiently. Thus, we have to fall back on heuristics, which is where neural networks ought to work well.
This algorithm is described in [1], where more details on all of these algorithms can be found. It is an evolutionary algorithm in which the network judges the quality of the intermediate solutions. The algorithm keeps a single candidate solution. At each step, it tries mutating the candidate lambda times, where a single mutation flips the value of one variable. The best of the mutants becomes the new candidate.
The network takes a single input for each clause. It gets a 1 if the clause is satisfied and a 0 otherwise. The network is trained on the fly for the specific 3SAT instance that we're trying to solve. Since we're training it as we go, we want to keep it simple and fast. Thus, we use a single neuron. When we update the weights, we simply increment the weight of each clause that is not satisfied. The result is that the algorithm will end up prioritizing clauses that are rarely satisfied. If a group of clauses interfere with each other in a way that makes it hard to satisfy all of them, then their weights will tend to increase over time.
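The scheme above can be sketched as follows. This is my own minimal reconstruction, not the code from [1]: the clause encoding (signed literal indices), the names, and the details of the (1, lambda) selection are assumptions.

```cpp
#include <array>
#include <cassert>
#include <cstdlib>
#include <random>
#include <vector>

// Hypothetical clause encoding: literal +v means variable v-1 is true,
// -v means variable v-1 is negated.
using Clause = std::array<int, 3>;

bool satisfied(const Clause& c, const std::vector<bool>& a) {
    for (int lit : c)
        if ((lit > 0) == a[std::abs(lit) - 1]) return true;
    return false;
}

// The single-neuron "network": fitness is the weighted count of
// satisfied clauses, so rarely satisfied clauses come to dominate.
double fitness(const std::vector<Clause>& cs, const std::vector<double>& w,
               const std::vector<bool>& a) {
    double f = 0.0;
    for (std::size_t i = 0; i < cs.size(); ++i)
        if (satisfied(cs[i], a)) f += w[i];
    return f;
}

// One generation: flip one random variable in each of lambda copies of
// the candidate and keep the best mutant as the new candidate.
std::vector<bool> generation(const std::vector<Clause>& cs,
                             const std::vector<double>& w,
                             const std::vector<bool>& cand,
                             int lambda, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, cand.size() - 1);
    std::vector<bool> best;
    double best_f = -1.0;
    for (int k = 0; k < lambda; ++k) {
        std::vector<bool> m = cand;
        std::size_t i = pick(rng);
        m[i] = !m[i];
        double f = fitness(cs, w, m);
        if (f > best_f) { best_f = f; best = m; }
    }
    return best;
}

// The weight update: increment the weight of every clause the current
// candidate leaves unsatisfied.
void adapt_weights(const std::vector<Clause>& cs, std::vector<double>& w,
                   const std::vector<bool>& a) {
    for (std::size_t i = 0; i < cs.size(); ++i)
        if (!satisfied(cs[i], a)) w[i] += 1.0;
}
```

Iterating `generation` and `adapt_weights` until every clause is satisfied (or an iteration limit is hit) gives the overall loop.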
The Neural Satisfaction algorithm takes a significantly different approach to the problem. The idea is to train the network on the clauses, and eventually extract information from the network to determine the values of the variables.
The network has a single input for each variable, and each input vector represents a clause. As shown in the figure, if a variable is not present in the clause, the corresponding input is 0; if it appears positively in the clause, the input is 1; and if it appears negatively, the input is -1. The output is 1 if any of the inputs has the same sign as the corresponding weight. The update rule adjusts the weights of the variables found in the clause by shifting them towards the input. If the clause is satisfied, then only weights with the same sign as the input are updated.
When we finish training, we find the solution by setting a variable to true if the corresponding weight is positive and false otherwise.
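A sketch of these rules, under my own assumptions: the exact form of "shifting towards the input" (I use `w += eta * (x - w)`), the treatment of zero weights, and the name `eta` for the learning rate are guesses, not details taken from [1].

```cpp
#include <cassert>
#include <vector>

// x[v] is -1, 0, or 1 depending on how variable v appears in the clause.
// Note: a weight of exactly 0 is treated as negative-signed here.
bool output(const std::vector<double>& w, const std::vector<int>& x) {
    for (std::size_t v = 0; v < w.size(); ++v)
        if (x[v] != 0 && (x[v] > 0) == (w[v] > 0)) return true;
    return false;
}

// One training presentation of a clause.
void train_on_clause(std::vector<double>& w, const std::vector<int>& x,
                     double eta) {
    bool sat = output(w, x);
    for (std::size_t v = 0; v < w.size(); ++v) {
        if (x[v] == 0) continue;                  // variable not in clause
        bool same_sign = (x[v] > 0) == (w[v] > 0);
        if (sat && !same_sign) continue;          // satisfied: same-sign only
        w[v] += eta * (x[v] - w[v]);              // shift weight toward input
    }
}

// Extract the assignment: a variable is true iff its weight is positive.
std::vector<bool> extract(const std::vector<double>& w) {
    std::vector<bool> a(w.size());
    for (std::size_t v = 0; v < w.size(); ++v) a[v] = w[v] > 0;
    return a;
}
```

Training repeatedly presents the instance's clauses (for example in round-robin order) and then calls `extract` to read off the assignment.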
Lamarckian SEA-SAW combines ideas from the two prior algorithms. Its core is the SAW-ing EA, but at each step it also runs a single step of a discrete form of the Neural Satisfaction algorithm and adds the result to the pool of candidates.
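One way to picture a single generation of the hybrid, with the two ingredient algorithms abstracted as callbacks. This decomposition is my own framing of the combination, not the structure used in [1]:

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <vector>

using Assignment = std::vector<bool>;

// One hypothetical Lamarckian SEA-SAW generation: the lambda SAW-ing EA
// mutants compete against one extra candidate produced by a discrete
// Neural Satisfaction step, and the fittest of the pool wins.
Assignment hybrid_generation(
    const Assignment& cand, int lambda,
    const std::function<Assignment(const Assignment&)>& mutate,
    const std::function<Assignment(const Assignment&)>& neural_step,
    const std::function<double(const Assignment&)>& fitness) {
    Assignment best;
    double best_f = -std::numeric_limits<double>::infinity();
    auto consider = [&](const Assignment& a) {
        double f = fitness(a);
        if (f > best_f) { best_f = f; best = a; }
    };
    for (int k = 0; k < lambda; ++k) consider(mutate(cand));
    consider(neural_step(cand));  // the neural candidate joins the pool
    return best;
}
```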
I ran each algorithm 10 times in each configuration. The problems were randomly generated and guaranteed to have solutions.
SAW-ing EA does reasonably well up to a fairly large problem.
| Number of variables | Number of clauses | Success rate |
| --- | --- | --- |
| 20 | 40 | 100% |
| 40 | 80 | 90% |
| 80 | 160 | 70% |
For Neural Satisfaction, the results weren't very good.
| Number of variables | Number of clauses | Success rate |
| --- | --- | --- |
| 20 | 40 | 20% |
| 40 | 80 | 30% |
| 80 | 160 | 0% |
Lamarckian SEA-SAW behaves similarly to SAW-ing EA, but a little bit better, as we might expect.
| Number of variables | Number of clauses | Success rate |
| --- | --- | --- |
| 20 | 40 | 100% |
| 40 | 80 | 100% |
| 80 | 160 | 80% |
These results partly confirm the results of the paper. The hybrid algorithm Lamarckian SEA-SAW indeed has the best results of the three algorithms. On the other hand, the results that I got for Neural Satisfaction were much worse than their original results. I'm not quite sure why this is. I used the same value of eta that they did, and the problems that I tested with used fewer clauses for the number of variables, so they should have been easier.
The tool is invoked as `3SAT` and accepts the following options:

| Option | Description |
| --- | --- |
| -v n | The number of variables in the 3SAT instance |
| -c n | The number of clauses in the 3SAT instance |
| -i n | The maximum number of iterations of the algorithm |
| -s | Use the SAW-ing EA algorithm |
| -n | Use the Neural Satisfaction algorithm |
| -l | Use the Lamarckian SEA-SAW algorithm |
| -eta n | The eta parameter of Neural Satisfaction |
| -lambda n | The lambda parameter of SAW-ing EA and Lamarckian SEA-SAW |
| -el n | The l parameter of Lamarckian SEA-SAW |
| -h | Print a help message and exit |
The source code for this tool can be downloaded here. The tool is written in C++. It should be possible to compile it with any reasonably conformant compiler. The code has been tested with Microsoft Visual C++ 2010 and GCC 4.4.1. The code also depends on Boost version 1.43 or later.