Named after Euclid, who first recorded it, the Euclidean algorithm is one of the oldest and greatest algorithms known to mathematics. Its simplicity and effectiveness still set a standard for modern mathematicians.
The subject of this topic is not the Euclidean algorithm, although it is certainly an inspiration for this discussion of a deceptively simple but powerful method for generating composite numbers by subtracting perfect squares. The method is also a means of determining primality by elimination: a technique that requires neither trial division nor modular arithmetic.
The Basic Idea
The initial idea comes from the following observation: the number that immediately precedes a perfect square is never prime, with the single exception of 4 - 1 = 3.

Perfect Square    Perfect Square - 1
  4 (2 * 2)         3
  9 (3 * 3)         8 (2 * 2 * 2)
 16 (4 * 4)        15 (3 * 5)
 25 (5 * 5)        24 (2 * 2 * 2 * 3)
 36 (6 * 6)        35 (5 * 7)
 49 (7 * 7)        48 (2 * 2 * 2 * 2 * 3)
 64 (8 * 8)        63 (3 * 3 * 7)
 81 (9 * 9)        80 (2 * 2 * 2 * 2 * 5)
100 (10 * 10)      99 (3 * 3 * 11)
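The observation is easy to check by brute force. Below is a small sketch in Python (my own verification code, not the code promised later in this article); trial division appears here only as an external check of the claim, not as part of the method itself:

```python
# Check that n^2 - 1 is composite for every n >= 3.
# Trial division is used only to verify the claim; it is not
# part of the square-subtraction method itself.

def is_composite(m: int) -> bool:
    """True if m has a divisor strictly between 1 and m."""
    return m > 3 and any(m % d == 0 for d in range(2, int(m ** 0.5) + 1))

exceptions = [n * n - 1 for n in range(2, 1000) if not is_composite(n * n - 1)]
print(exceptions)  # the single prime exception: [3]
```

Running this reports only 3, confirming that 4 - 1 = 3 is the lone exception in this range.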
Now 1 is 1, but it is also the first perfect square: 1 * 1 = 1. So is it the oneness of 1 that ensures that PS - 1 is always a composite number, or is it its property of squareness?
It seemed possible that if this rule held for PS - 1, it might also hold for PS - N, where N is another perfect square. Just maybe...
The huge surprise is that the rule holds true for every perfect square. In every case, with the important and irregular exception of the first subtraction (the first row in the table), which we will discuss in a moment, the numbers produced by subtracting perfect squares are composite:
 PS  PS-1    PS  PS-4    PS  PS-9    PS  PS-16   PS  PS-25   PS  PS-36   PS  PS-49
  4     3     9     5    16     7    25     9    36    11    49    13    64    15
  9     8    16    12    25    16    36    20    49    24    64    28    81    32
 16    15    25    21    36    27    49    33    64    39    81    45   100    51
 25    24    36    32    49    40    64    48    81    56   100    64   121    72
 36    35    49    45    64    55    81    65   100    75   121    85   144    95
 49    48    64    60    81    72   100    84   121    96   144   108   169   120
 64    63    81    77   100    91   121   105   144   119   169   133   196   147
 81    80   100    96   121   112   144   128   169   144   196   160   225   176
100    99   121   117   144   135   169   153   196   171   225   189   256   207
So, to make sure you understand what this table shows: you're looking at a matrix of perfect squares and the results of subtracting them. The perfect squares increase by one square in every row and in every other column. The squares being subtracted (in the column headers) are one perfect square behind the corresponding squares in the rows, and all we're doing is subtracting them and showing the results in the alternating columns.
For clarity, let's refer to the perfect squares that are being subtracted as "minus-squares": if 16 - 9 = 7 and 25 - 9 = 16, the minus-square is 9 in both cases.
Now notice that the perfect squares in the 1st, 3rd, 5th, 7th, 9th, 11th, and 13th columns shift up by one square each time. That's simply because the minus-square must be smaller than the perfect square. So when subtracting 4, the first perfect square to be considered is 9; when subtracting 9, the first perfect square to be considered is 16; and so on.
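The table itself is entirely mechanical to generate. Here is a sketch in Python (the function name and layout are my own) that reproduces the square/result column pairs described above:

```python
# Reproduce the table: in column m, each square s^2 (with s > m) is paired
# with s^2 - m^2, where m^2 is that column's minus-square.

def subtraction_table(num_columns: int, num_rows: int):
    rows = []
    for r in range(num_rows):
        cells = []
        for m in range(1, num_columns + 1):   # minus-square root
            s = m + 1 + r                     # first square considered is (m+1)^2
            cells.append((s * s, s * s - m * m))
        rows.append(cells)
    return rows

for row in subtraction_table(7, 9):
    print("  ".join(f"{ps:>3} {diff:>3}" for ps, diff in row))
```

The printed output matches the table above, column pair by column pair.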
The first row
The first row is created by the first subtraction, where the minus-square is one square smaller than the perfect square: 4 - 1, 9 - 4, 16 - 9, 25 - 16, 36 - 25, 49 - 36, 64 - 49, and 81 - 64. Let's look a little harder at what's happening in the first-row results: 3, 5, 7, 9, 11, 13, and 15.
It actually looks pretty simple: the results increase by 2 with each successive minus-square. That 2 happens to be exactly the amount by which the gap between consecutive perfect squares grows; more about what this difference means in this article.
It turns out that 9 - 4 = 5 is, of course, a prime number. In other words, this is a single exception (like 4 - 1 = 3) to the otherwise uniform compositeness of the perfect-squares-minus-4 column. Similarly, 16 - 9 = 7, 36 - 25 = 11, and 49 - 36 = 13 are exceptions to the otherwise uniform compositeness of their respective columns.
However, 25 - 16 = 9 and 64 - 49 = 15 are definitely composite. There are no exceptions to compositeness in those columns.
If we were hoping to find some kind of transparent pattern in the exceptions, we will be disappointed. But we should not give up too easily, because it so happens that this table has a very nice property: the property of repetition, or reoccurrence:
 PS  PS-1    PS  PS-4    PS  PS-9    PS  PS-16   PS  PS-25   PS  PS-36   PS  PS-49
  4     3     9     5    16     7    25     9    36    11    49    13    64    15
  9     8    16    12    25    16    36    20    49    24    64    28    81    32
 16    15    25    21    36    27    49    33    64    39    81    45   100    51
Prime numbers can occur only once in this table, but composites can occur once, twice, or more than twice. So in this tiny sample, 9 cannot be a prime because it has already occurred in the table as a perfect square (the product 3 * 3). 15 cannot be a prime because it has already occurred in the table as the composite result of a perfect square subtraction (16 - 1).
Thus we have a twopart test for primality:
- A positive requirement: the number must occur in the first perfect square subtraction (the first row).
- A negative requirement: the number must not already have occurred, either as a perfect square or as the result of a perfect square subtraction.
That's it in a nutshell. You now have a complete algorithm for determining compositeness and primality that requires neither division nor exponentiation.
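As a concrete illustration, here is one way the two-part test might be sketched in Python (the function name, the bound handling, and the special-casing of 2 are my own; the first row consists of the odd numbers 3, 5, 7, ..., so the table only ever tests odd candidates):

```python
# Sketch of the two-part test. First row: s^2 - (s-1)^2 = 2s - 1, i.e. the
# odd numbers >= 3. A first-row number is prime unless it reappears as a
# perfect square or as a deeper-row difference s^2 - m^2 with s - m >= 2.

def primes_by_square_subtraction(limit: int) -> list[int]:
    ruled_out = set()
    # Odd perfect squares 9, 25, 49, ... are composite first-row candidates.
    k = 3
    while k * k <= limit:
        ruled_out.add(k * k)
        k += 2
    # Rows below the first: every s^2 - m^2 with s - m >= 2 is composite.
    m = 1
    while (m + 2) * (m + 2) - m * m <= limit:
        s = m + 2
        while s * s - m * m <= limit:
            ruled_out.add(s * s - m * m)
            s += 1
        m += 1
    # First row = the odd numbers >= 3; 2 is added separately as the even prime.
    return [2] + [n for n in range(3, limit + 1, 2) if n not in ruled_out]

print(primes_by_square_subtraction(50))
```

Note that the sieve uses only multiplication, subtraction, and comparison: no division, no modular arithmetic.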
Some open questions
One big open question is how many rows the table needs in order to ensure that the compositeness of the first-row non-primes can be proved by prior occurrence. This matters because it determines the efficiency of the algorithm for proving primes.
The number of rows needed to prove composites increases progressively: from about 20 rows for primes under 1,000, to about 50 rows for primes under 5,000, to about 100 rows for primes under 10,000. But this is very approximate, and it does not take into account that the row-depth requirement actually decreases almost geometrically after the first few columns. This will be the subject of the next article on this topic.
However, if your aim is simply to generate the largest number of composites safely, you can go as deep as you like in terms of the number of rows. You can be assured that every number generated will be composite, except those that appear in the very first row. The arithmetic operations are so simple that, even though the numbers rapidly become very large as the row depth increases, the performance impact is not even a consideration.
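In that spirit, here is a minimal sketch (names mine) that mines the table below the first row for guaranteed composites; ordinary trial division appears only as an external sanity check, not as part of the method:

```python
# Every entry below the first row has s - m >= 2, and
# is therefore composite: s^2 - m^2 = (s - m)(s + m).

def composites_from_table(max_m: int, max_depth: int) -> set[int]:
    """All s^2 - m^2 with 1 <= m <= max_m and 2 <= s - m <= max_depth + 1."""
    return {(m + d) * (m + d) - m * m
            for m in range(1, max_m + 1)
            for d in range(2, max_depth + 2)}

def is_composite(n: int) -> bool:
    """External check only -- not part of the method."""
    return n > 3 and any(n % k == 0 for k in range(2, int(n ** 0.5) + 1))

assert all(is_composite(c) for c in composites_from_table(40, 40))
print(len(composites_from_table(40, 40)), "composites generated")
```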
There are a few complexities that I didn't mention. All odd composite numbers appear, but an even number appears only if it is a multiple of 4: for example, 6, 10, and 14 do not appear, while 8, 12, and 16 do. The reason is that a perfect square is always congruent to 0 or 1 modulo 4, so the difference of two squares can be congruent to 0, 1, or 3 modulo 4, but never to 2.
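This absence of numbers of the form 4k + 2 is easy to confirm empirically (verification code mine):

```python
# Squares mod 4 are 0 or 1, so s^2 - m^2 mod 4 is 0, 1, or 3 -- never 2.

diffs = {s * s - m * m for m in range(1, 60) for s in range(m + 1, 60)}

assert all(d % 4 != 2 for d in diffs)   # no 6, 10, 14, ...
assert {8, 12, 16} <= diffs             # multiples of 4 do appear
print(sorted(d for d in diffs if d <= 20))
```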
Another feature of the table is the factor "rays" (or curves) that emanate from the top-left corner and traverse the table without breaks at various angles. These entries (shown in orange in the original presentation) are numbers that are evenly divisible by their respective minus-squares; thus, 9 is a factor of 27, 72, and 135. Notice that the interval between multiples of a perfect square increases by one row (one square) with each successive column (minus-square):
 PS  PS-1    PS  PS-4    PS  PS-9    PS  PS-16   PS  PS-25   PS  PS-36   PS  PS-49
  4     3     9     5    16     7    25     9    36    11    49    13    64    15
  9     8    16    12    25    16    36    20    49    24    64    28    81    32
 16    15    25    21    36    27    49    33    64    39    81    45   100    51
 25    24    36    32    49    40    64    48    81    56   100    64   121    72
 36    35    49    45    64    55    81    65   100    75   121    85   144    95
 49    48    64    60    81    72   100    84   121    96   144   108   169   120
 64    63    81    77   100    91   121   105   144   119   169   133   196   147
 81    80   100    96   121   112   144   128   169   144   196   160   225   176
100    99   121   117   144   135   169   153   196   171   225   189   256   207
It appears that, in the odd minus-square columns, a number that is not a multiple of the minus-square is always coprime (relatively prime) to the minus-square. This remains to be verified.
Well, all that remains is to make the code available to demonstrate these findings... Enjoy!
All you have to do is select everything in the text area above and drag or copy it into the Excel Visual Basic Editor, dropping it into the VBAProject ThisWorkbook object. Save and close the file. Then reopen it, and the code will auto-start, provided that your macro security is not set to High or Very High.
Part 2: A Deterministic Test for Compositeness Using Perfect Squares... 