## Project Euler Problem #144

### April 1, 2013

Read the details of the problem here

**Summary**

Investigating multiple reflections of a laser beam.

**Solution**

A nice little problem, helped considerably by Euler providing the tangent slope for points on the ellipse. It’s a matter of finding the intersection point between the vector denoting the beam’s path and the ellipse equation, then reflecting it around the normal (at right angles to the tangent) of the mirror’s slope at that point (using the dot and cross products). This is just repeated until the beam’s intersection falls within a suitable tolerance of the entry/exit gap at the top.
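As an aside, the reflection step can also be done with the standard vector-reflection formula r = d − 2(d·n / n·n)n rather than the dot/cross rotation used in the listing below. This is a minimal sketch in Java with my own naming; for the ellipse 4x² + y² = 100 the (unnormalised) normal at (x, y) is proportional to (4x, y):

```java
// Sketch: reflect a direction vector (dx, dy) about a surface whose
// (unnormalised) normal is (nx, ny), using r = d - 2(d.n / n.n) n.
// Names and structure are mine, not the post's.
public class Reflect {
    static double[] reflect(double dx, double dy, double nx, double ny) {
        double k = 2.0 * (dx * nx + dy * ny) / (nx * nx + ny * ny);
        return new double[] { dx - k * nx, dy - k * ny };
    }

    public static void main(String[] args) {
        // A 45-degree beam bouncing off a horizontal mirror (normal (0, 1))
        double[] r = reflect(1.0, -1.0, 0.0, 1.0);
        System.out.println(r[0] + ", " + r[1]); // 1.0, 1.0
    }
}
```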

```groovy
def ( answer, epsilon ) = [ 0, 0.01d ]
def p0 = [ x:0.0d, y:10.1d ]
def p1 = [ x:1.4d, y:-9.6d ]

while ((answer < 1) || !((Math.abs(p1.x) <= epsilon && (p1.y > 0.0)))) {
    answer++
    def a = [ x:p0.x - p1.x, y:p0.y - p1.y ]
    def b = [ x:-4.0 * p1.x, y:-p1.y ]
    def ( dotp, crossp ) = [ a.x * b.x + a.y * b.y, a.x * b.y - b.x * a.y ]
    def c = [ x: dotp * b.x - crossp * b.y, y: crossp * b.x + dotp * b.y ]
    def s = (-2.0 * (4.0 * c.x * p1.x + c.y * p1.y)) /
            (4.0 * (c.x * c.x) + (c.y * c.y))
    def p2 = [ x:s * c.x + p1.x, y:s * c.y + p1.y ]
    ( p0, p1 ) = [ p1, p2 ]
}
```

This executes in Groovy 1.8.8 under Java 7u17 in around 0.12 seconds, so well within my 5 second cut-off.

**Conclusion**

Groovy made for quite a terse solution to this problem. The ability to use tuples with named members as lightweight point and vector classes was quite convenient and made for more readable code. The multiple assignment capability kept the LoC count down too. It’s important to include the ’d’ type signifiers on the starting values to force the use of *doubles* throughout the calculations. This is one of those Gr-r-r-oovy annoyances that I’d prefer not having to worry about though.

## Rolling the Groovy Dice

### January 6, 2013

Bit of a throwaway post, but it might save someone 5 minutes… Nothing to do with Project Euler…

I needed to have a quick way of rolling an arbitrary number of polyhedral dice including the ability to take a number of the highest from all those rolled.

I did this in Groovy by adding an overloaded `roll()` instance method to Integer, along with a helper `isPolyhedralDie()` method (but only to catch me making some sort of basic typo in the calling code).

```groovy
Integer.metaClass.isPolyhedralDie() {
    (delegate as Integer) in [ 3, 4, 6, 8, 10, 12, 20, 100 ]
}

Integer.metaClass.roll() { delegate.roll(1) }

Integer.metaClass.roll() { dice, best = dice ->
    assert delegate.isPolyhedralDie() && (best > 0) && (dice >= best)
    def ( sides, rnd ) = [ delegate, new Random() ]
    (1..dice).collect { rnd.nextInt(sides) + 1 }
             .sort().reverse()[0..<best].sum()
}

// Generate 4x 3/4d6 character attribute sets
4.times {
    def r = (1..6).collect { 6.roll(4, 3) }
    r.countBy { it }.sort { a, b -> b.key <=> a.key }
     .each { k, v -> print "${v}x $k, " }
    println "Sum = ${r.sum()}"
}
```

Usage:

```groovy
Integer#roll()                   // 6.roll()    = roll 1d6
Integer#roll(noOfDice)           // 6.roll(3)   = roll 3d6
Integer#roll(noOfDice, noOfBest) // 6.roll(4,3) = roll 4d6 take 3
```
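The same roll-and-keep idea translates readily to plain Java. This is a sketch with my own naming, not a port of the post's metaclass code: roll `dice` d`sides`, keep the `best` highest rolls and return their sum.

```java
import java.util.Arrays;
import java.util.Random;

public class Dice {
    // Roll `dice` dice with `sides` sides, sum the `best` highest results.
    static int roll(int sides, int dice, int best, Random rnd) {
        int[] rolls = new int[dice];
        for (int i = 0; i < dice; i++) rolls[i] = rnd.nextInt(sides) + 1;
        Arrays.sort(rolls);                                       // ascending
        int sum = 0;
        for (int i = dice - best; i < dice; i++) sum += rolls[i]; // top `best`
        return sum;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        int attr = roll(6, 4, 3, rnd); // 4d6 keep best 3: always in 3..18
        System.out.println(attr);
    }
}
```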

## Groovin’ in Java

### March 7, 2012

I’ve just been watching an interesting presentation on using Groovy as an additional toolkit in Java. Groovy excels in handling XML, File I/O and JDBC, as well as offering good support for Collections and Closures (as regular readers of this blog already know!) – the type of things you typically need to do day-in, day-out in your regular job.

If you want to learn more about what you can do with Groovy in the real world take a look at the presentation on InfoQ – Improve Your Java with Groovy

The presenter, Ken Kousen, also has a good blog with some very useful information on using Groovy for doing regular & proper programming tasks – the sort of thing it’s really intended for – not what I do here!

He’s got a book in the works, Making Java Groovy, which should be coming out in late 2012.

## Project Euler Problem #64

### March 4, 2012

Read the details of the problem here

**Summary**

How many continued fractions for N ≤ 10000 have an odd period?

**Solution**

It took a little while for me to understand how this was working so, to make it clearer, this is essentially what’s required:

1. Take the square root of the original number, then…
2. The integer portion becomes the next number in the sequence,
3. Take the reciprocal of the remaining decimal portion,
4. Take this number as input into Step 2.

Obviously, at step 3, if the decimal portion has been seen before, that sequence would just repeat from then on, so a cycle would have been found. In that respect it’s very similar to the solution for Problem #26.

As an example, here’s the trace for 23, with its square root – 4.79583.

n | Digit | Remainder |
---|---|---|
4.79583 | 4 | 0.79583 |
1.25655 | 1 | 0.25655 |
3.89792 | 3 | 0.89792 |
1.11369 | 1 | 0.11369 |
8.79583 | 8 | 0.79583 |
1.25655 | 1 | 0.25655 |
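The steps above can be sketched directly as floating-point code – an illustration in Java rather than the post’s Groovy, with my own naming. It reproduces the trace for 23 but, as noted below, loses precision on the longer periods:

```java
import java.util.ArrayList;
import java.util.List;

public class CfDigits {
    // Naive continued-fraction expansion of sqrt(n), following the
    // numbered steps: take the integer portion as the next term, then
    // feed the reciprocal of the decimal portion back into Step 2.
    static List<Integer> cfDigits(int n, int maxTerms) {
        List<Integer> digits = new ArrayList<>();
        double x = Math.sqrt(n);
        for (int t = 0; t < maxTerms; t++) {
            int a = (int) x;      // integer portion -> next term
            digits.add(a);
            double frac = x - a;  // remaining decimal portion
            if (frac < 1e-9) break;
            x = 1.0 / frac;       // reciprocal goes back into Step 2
        }
        return digits;
    }

    public static void main(String[] args) {
        System.out.println(cfDigits(23, 9)); // [4, 1, 3, 1, 8, 1, 3, 1, 8]
    }
}
```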

Whilst this works as an implemented algorithm for numbers with short sequences, it breaks down on the longer ones, some of which are over 200 elements long, due to problems with precision. So I expanded out the algorithm to better reflect the way that Euler describes it in the question and coded that up. I made the assumption that the sequence would start from the first position rather than an arbitrary one.

```groovy
def cf(n) {
    def ( i, j, c, k ) = [ 0, 1, 0, null ]
    def sqrt_n = Math.sqrt(n)
    while (true) {
        j = (n - (i * i)).intdiv(j)
        i = -(i - ((int)(sqrt_n + i)).intdiv(j) * j)
        if ([i, j] == k) return c - 1
        if (c == 1) k = [i, j]
        c++
    }
}

def answer = (2..10000).count {
    (((int)Math.sqrt(it)) ** 2 != it) && (cf(it) % 2)
}
```

This executes in Groovy 1.8.6 under Java 7u3 on my Samsung 700G7A i7-2670QM 2.2 GHz machine in around 0.45 seconds, so well within my 5 second cut-off (reduced from the old 15 seconds on my old machine). This runs quite fast as it only ever uses integers – apart from the initial sqrt calculation.

**Conclusion**

Didn’t use many Groovy features for this solution, in the main, apart from keeping the function initialization quite terse and the count listop on the number range. Performance was fairly good – though an equivalent Java solution runs in 55 ms – around 8 times faster.

## Project Euler Problem #65

### January 29, 2012

Read the details of the problem here

**Summary**

Find the sum of digits in the numerator of the 100^{th} convergent of the continued fraction for *e*.

**Solution**

This question immediately reminded me of Problem #57 which ended up being quite a simple solution though it sounded involved initially. I took a look at that, and adopted a similar approach to this one.

The trick with this problem was to spot the pattern as to how the numerator changed down the sequence. As it turned out, the values of the denominator are superfluous and can just be ignored for this purpose.

Taking the first ten terms as given (with a zeroth term added for completeness), this is how it works:

Term | Numerator | Multiplier | Formula |
---|---|---|---|
0 | 1 | | |
1 | 2 | 2 | 2 * num_{0} |
2 | 3 | 1 | 1 * num_{1} + num_{0} |
3 | 8 | 2 | 2 * num_{2} + num_{1} |
4 | 11 | 1 | 1 * num_{3} + num_{2} |
5 | 19 | 1 | 1 * num_{4} + num_{3} |
6 | 87 | 4 | 4 * num_{5} + num_{4} |
7 | 106 | 1 | 1 * num_{6} + num_{5} |
8 | 193 | 1 | 1 * num_{7} + num_{6} |
9 | 1264 | 6 | 6 * num_{8} + num_{7} |
10 | 1457 | 1 | 1 * num_{9} + num_{8} |

So this can be summarised as:

num_{n} = mul_{n} * num_{n-1} + num_{n-2}

Where *n* is the term being calculated and mul_{n} is the n^{th} element of the continued fraction. This is simply *n* / 3 * 2 when *n* mod 3 == 0, and 1 otherwise.
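A direct, unoptimised sketch of this recurrence (in Java, with my own naming – the actual solution below batches terms in triplets instead) reproduces the table above:

```java
import java.math.BigInteger;

public class EConvergent {
    // num_n = mul_n * num_{n-1} + num_{n-2}, where mul_n is (n / 3) * 2
    // when n mod 3 == 0, and 1 otherwise.
    static BigInteger numerator(int term) {
        BigInteger prev2 = BigInteger.ONE;        // num_0 = 1
        BigInteger prev1 = BigInteger.valueOf(2); // num_1 = 2
        if (term == 0) return prev2;
        for (int n = 2; n <= term; n++) {
            long mul = (n % 3 == 0) ? (n / 3) * 2 : 1;
            BigInteger next = BigInteger.valueOf(mul).multiply(prev1).add(prev2);
            prev2 = prev1;
            prev1 = next;
        }
        return prev1;
    }

    public static void main(String[] args) {
        System.out.println(numerator(10)); // 1457
    }
}
```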

Given that the multipliers work in triplets, this can be used to advantage to reduce the number of calculations required. I created a function that, given the n-1 and n-2 numerators and a term number, could calculate the next three numerators in the sequence. It meant that I didn’t have to do any modulus calculations as long as I kept the boundary aligned correctly, i.e. only call it for terms 2, 5, 8, etc., which were the start of the triplets.

Starting at term 2, I then called this recursively until it was being asked to calculate the 101^{st} term. At this point the 100^{th} term would be the n-1 numerator value and could be returned.

```groovy
def f(p1, p2, tc) {
    if (tc == 101) return p1
    def m = p1 + p2
    def n = ((tc + 1).intdiv(3) * 2) * m + p1
    f(n + m, n, tc + 3)
}

def answer = (f(2G, 1G, 2).toString() as List)*.toInteger().sum()
```

This runs in around 115 ms, with the sum of the digits taking about 10 ms. A port of this code to Java runs in around 1.3 ms in total.

**Conclusion**

Groovy was a nice language to do this in. Not having to mess about with BigInteger objects explicitly is always a boon in the Java world. Performance was quite disappointing though – given that f() is only ever called with BigInteger arguments the compiler should be able to work out exactly what types the variables are and the generated bytecode should be pretty similar to that which javac outputs…but it’s not. In this case Groovy is nearly two orders of magnitude slower and I really don’t see a good reason for it at this stage in the language maturity.

## Project Euler Problem #243

### January 21, 2012

Read the details of the problem here

**Summary**

Find the smallest denominator *d*, having a resilience *R*(*d*) < 15499 ⁄ 94744

**Solution**

Due to real-life intrusion I haven’t been doing much Project Euler for the last few months but a conversation with a colleague at work the other day brought the subject up and I thought I’d revisit the site.

They’ve changed their format quite a lot now and though it’s still fundamentally the same, the concept of “badges” has been introduced – “awards” above and beyond just the old numbers of puzzles solved (which is now more granular). I’m a sucker for things like that and one of them – “Trinary Triumph” – just needed one more question to complete so it was, as the saying goes, like a red rag to a bull!

The problem was actually worded quite straightforwardly, and a little thought showed that this was very similar to Problem #69 and Problem #70. The numerator in the resilience function is going to be the count of terms that are relatively prime to the denominator i.e. Euler’s Totient (φ) function. The solution to Problem #69 was found by maximising n/φ(n) whereas, in this problem we’re looking to minimize φ(d)/(d-1) – so it’s essentially the same thing. The answer is likely to be found by looking for a number consisting of the product of many small prime factors perhaps with some multiplier (also a product of small primes) i.e. some of those factors may be of a higher power.

Problem #108 and Problem #110 also tackled the domain of products of powers of primes and the solution to the latter seemed a good place to start as the one to the former ended up as being a bit of a lucky hack. As per that solution, taking the approach of assuming that the answer was to be found somewhere at n * product(prime factors) and driving off that meant that factorisation of large numbers wasn’t needed so Groovy would remain a viable language to do this in.

Again, when doing these problems with Groovy, it’s best to try to work out the potential scope ahead of runtime as it doesn’t perform that well at number crunching.

What I did need to know was how to calculate the totient value for numbers with large numbers of factors – I’d kind of worked it out from a bit of guesswork for two factors when doing Problem #70, but that wouldn’t work here. Luckily, Wikipedia came to the rescue, giving the formula quite succinctly as:

φ(n) = n(1-1/p_{1})(1-1/p_{2})…(1-1/p_{r}).
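Since every factor here divides n, the formula can be evaluated exactly in integers as n · ∏((p − 1)/p), dividing before multiplying. A minimal sketch (my own naming):

```java
public class Totient {
    // phi(n) from n's distinct prime factors: for each prime p,
    // multiply the running result by (p - 1) / p. Dividing first keeps
    // everything exact, since p always divides the running result.
    static long phi(long n, long[] distinctPrimes) {
        long result = n;
        for (long p : distinctPrimes) {
            result = result / p * (p - 1);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(phi(10, new long[] { 2, 5 })); // 4
        System.out.println(phi(36, new long[] { 2, 3 })); // 12
    }
}
```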

Using this I found that the product of a prime sequence 2..23 gave a totient above what was being sought, whilst 2..29 was a potential solution itself. The answer probably fell somewhere between these two and this therefore formed the search space.

I then just reused the helper function I’d used in Problem #110 with a bit of an educated guess as to what the maximum power might be. If the multiplier was 2^{5}, i.e. 32, this would exceed the solution of the longer prime sequence – so a maximum power of 4 seemed reasonable to use. I also seeded the generator function with the assumption that all of the primes between 2..23 were going to be factors – this seemed a reasonable assumption given what I knew of the totient function’s characteristics.

```groovy
def gen_powers(range, maxlen, yield) {
    gen_powers(range, maxlen, [], yield)
}

def gen_powers(range, maxlen, pows, yield) {
    if (pows.size() == maxlen) yield(pows)
    else if (pows.size() == 0)
        range.each { gen_powers(range, maxlen, [it], yield) }
    else
        range.findAll { it >= pows[0] }
             .each { gen_powers(range, maxlen, [it] + pows, yield) }
}

def ( answer, TARGET, POWER_RANGE ) = [ Long.MAX_VALUE, 15499/94744, (0..4) ]
def primes = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ]

gen_powers(POWER_RANGE, primes.size(), [1] * primes.size() - 1) { powers ->
    def ( d, factors ) = [ 1G, [] ]
    powers.eachWithIndex { v, i ->
        d *= primes[i] ** v
        if (v > 0) factors << primes[i]
    }
    if ((d > 1) && (answer > d) &&
        (((int)(factors.inject(d) { p, v -> p * (1 - 1/v) })) / (d - 1) < TARGET)) {
        answer = d
    }
}
```

Despite the heavy use of the slow Groovy listops (and a bit of hacky code) this runs in around 940 ms.

Next will be Problem #65, a question on continued fractions, that will give me my “As Easy As Pi” award!

**Conclusion**

With the upfront analysis on the problem and the re-use of the power generator, Groovy made it easy to put together a terse solution that ran in a reasonable time. When doing the calculation of the value of the denominator I did find myself missing Python’s map() capability to process **two** lists against a closure and return a result. Sounds like the sort of thing that should be in there, maybe it’s buried in the Groovy docs somewhere…

## A 360° View of 3D

### September 21, 2011

The time has come at last to splash out on one of the new fangled plasma or LED TV sets.

However, my ancient 28" Panasonic QuintrixF box with the excellent Tau flat-screen, having done 10 years of faithful service (albeit with a couple of repairs during that time), has started to show tell-tale signs of impending failure – the Dolby Surround circuitry lost the centre channel a year ago and the picture does an occasional wobble at the top.

I would get it repaired but I’ve a feeling that finding parts might be problematic now and, besides, there are some excellent new TVs on the market (or so I’m told) that should do the job AND they’re 3D capable too.

I’ve put off buying a new TV over the last few years, even when true 1080p HD came out as the picture quality on these sets, even with all their HD-ness, just wasn’t up to the QuintrixF Tau screen. When it came to watching standard definition (SD) video on them, they couldn’t hold a candle to my Panasonic CRT TV.

But it’s time to look around. I’m a bit of a sceptic when it comes to the marketing hype of consumer electronics and like to see the products in real-life use.

The up-market screens from pretty much all the major manufacturers are very capable now, though I’d argue that their HD picture quality is no better than my current TV, and are much improved in up-scaling SD signals so as to avoid that apparently blurry image especially prevalent on low-bitrate channels.

Buying such an up-market screen generally means getting a 3D-capable set, as the manufacturers pair their best panels with the best electronics. I imagine that this seriously over-weights the statistics of people "buying into" 3D technology over the last year or so.

With this in mind, I’ve spent the last few months pestering friends and relatives who took the plunge and bought a 3D-capable set last Christmas. So, having now watched quite a bit of 3D footage, courtesy of their hospitality, I’ve made some observations about the medium.

Apart from the technical issues, such as faded colours, image cross-talk, the physical restrictions of passive 3D glasses and the costs of active 3D glasses that people often talk about, there are four perceptual problems of 3D that I’ve noticed. This is just my opinion, but it’s my blog, so here they are:

**Lack of Near-Field Parallax** – I’m not sure if this is what it’s really called, but it seems quite descriptive of the effect. When an object is in the foreground and apparently close to the viewer, if the viewer moves their head they’d expect the object to move in relation to the background, revealing new areas and obscuring others. Obviously, as the picture is actually flat and only presents a static 3D view, this doesn’t happen, and that spoils the immersive experience.

This isn’t such a problem with more distant objects, as a viewer expects much less parallax to occur between such an object and the background. The best way around this seems to be to sit still and watch TV as if you’re in the cinema – maybe some people do this but, personally, I don’t.

**Assuming an Infinite Depth of Field** – In standard 2D content the director typically denotes the main subject of a scene by focusing on them with a shallow depth of field. The background appears deliberately out of focus, but that’s OK as it’s pretty much how our eyes work.

In 3D content there seems to be a preference to demonstrate its 3D nature by having an infinite depth of field with everything in focus, even when a "special" 3D effect is used, as is the wont in the traditional realm of 3D horror films. This practice makes the next two effects seem worse than they might otherwise be.

**Billboarding** – I don’t believe this is the correct term, but it’s quite descriptive of the apparent effect. In short, current 3D technologies don’t seem to be capable of providing a completely smooth depth transition between objects in a scene. I don’t know if this is a feature of the cameras or the TVs.

This shortcoming leads to "billboarding" – in that it appears that a 2D picture of a 3D scene has been pasted onto a billboard which is then set at a distance into the scene. This technique is used in video game development to reduce the computational complexity of rendering.

The scene is seen to be composed of a set of these "planes" rather than being smoothly graduated. In the worst cases, this effect is even seen on single objects. For example, one demo took the viewer across the Pont Sant’Angelo in Rome (which I recommend visiting sometime) with close-ups on the 17th-century angelic statues.

Unfortunately, although the picture was excellent, the statues appeared dislocated. Where a hand was pointing “out” of the screen it was disjointed from the forearm, which was on another "plane", and this extended to the shoulder, wings, etc. It was as if it was a Channel 4 ident with pieces of stone suspended on invisible wires and visible as a solid body only when viewed from a specific angle.

**Focal Distance Adjustment** – This seems to be related to the "Infinite Depth of Field" issue I’ve mentioned above. In order to "believe" a 3D scene, it seems that my eyes have to be fooled into assuming that the objects are a certain distance away. When the scene shifts so that the "focus" is now apparently at a different distance but remains sharp without my eyes having to re-adjust, it seems wrong. At best this just destroys the illusion of 3D until I "lock in" again but, at worst, it makes me feel a little seasick!

So what do I believe this means for 3D TV?

The technology will continue to improve so the technical concerns will probably be consigned to history within the next few generations of the systems themselves.

The limitations of physical equipment will be solved as, I imagine, should the "billboarding" effect described above – unless this is actually some form of physiological limitation of human cognitive processing.

The issue with lack of near field parallax will probably drive the adoption of VERY large 60”+ screens that can be placed farther away from the viewer so that the problem is just less pronounced. Unless the TV can generate a private 3D image for each viewer and use some form of individual eye tracking I don’t see how else this might be tackled.

The other issues are really a matter of directorial style. The modern way of shooting 2D TV seems to be to use close-in cameras with rapidly changing viewpoints in order to engage the viewer and make them feel like they’re actually part of the action. What works well for 2D simply doesn’t work for 3D. The rapidly changing perspective just leads to disorientation and destroys the immersive 3D experience.

I believe that, for 3D, viewers need to be treated more like a theatre audience as passive onlookers onto a scene. Viewpoints need to be established and held in order for the audience to “lock in” to the scene’s perspective. This means that the 2D version of a film won’t just be a single-eye image of a 3D film but, for a large part, a differently shot piece of work. Obviously this will push up production costs. I don’t like to think that we’ll be seeing 2D movies going the same way as black and white movies and only being shot as art nouveau retro pieces.

In short, 3D holds a lot of promise, especially in the gaming market where the scene is being generated on the fly, but for general viewing I’m really not convinced that there’s a great need at the moment (sport may be an exception though) until the content production industry works out how to handle the artistic differences between 2D and 3D in order to get the most from both mediums. There is some fantastic 3D content out there but not enough to warrant buying a 3D set specifically to see it.