Project Euler Problem #65

January 29, 2012

Read the details of the problem here

Summary

Find the sum of digits in the numerator of the 100th convergent of the continued fraction for e.

Solution

This question immediately reminded me of Problem #57, which sounded quite involved initially but ended up having a simple solution. I took a look at that and adopted a similar approach for this one.

The trick with this problem was to spot the pattern in how the numerator changes down the sequence. As it turned out, the values of the denominator are superfluous and can simply be ignored for this purpose.

Taking the first ten terms as given (with a zeroth term added for completeness), this is how it works:

Term  Numerator  Multiplier  Formula
   0          1
   1          2           2  2 * num0
   2          3           1  1 * num1 + num0
   3          8           2  2 * num2 + num1
   4         11           1  1 * num3 + num2
   5         19           1  1 * num4 + num3
   6         87           4  4 * num5 + num4
   7        106           1  1 * num6 + num5
   8        193           1  1 * num7 + num6
   9       1264           6  6 * num8 + num7
  10       1457           1  1 * num9 + num8

So this can be summarised as:

num(n) = mul(n) * num(n-1) + num(n-2)

where n is the term being calculated and mul(n) is the nth element of the continued fraction. For n >= 2 this is simply (n / 3) * 2 when n mod 3 == 0, and 1 otherwise; term 1 is the special case of 2, the integer part of e.
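As a quick sanity check (just a throwaway sketch, not part of the final solution), this reproduces the multiplier column of the table above:

def mul = { n -> n == 1 ? 2 : (n % 3 == 0 ? n.intdiv(3) * 2 : 1) }
assert (1..10).collect { mul(it) } == [2, 1, 2, 1, 1, 4, 1, 1, 6, 1]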

Given that the multipliers work in triplets, this can be used to advantage to reduce the number of calculations required. I created a function that, given the n-1 and n-2 numerators and a term number, could calculate the next three numerators in the sequence. It meant that I didn’t have to do any modulus calculations as long as I kept the boundary aligned correctly, i.e. only called it for terms 2, 5, 8, etc., which are the starts of the triplets.

Starting at term 2, I then called this recursively until it was being asked to calculate the 101st term. At this point the 100th term would be the n-1 numerator value and could be returned.

// p1 and p2 are the numerators of terms tc-1 and tc-2 respectively
def f(p1, p2, tc) {
  if (tc == 101) return p1                    // p1 is the 100th numerator
  def m = p1 + p2                             // term tc (multiplier is 1)
  def n = ((tc + 1).intdiv(3) * 2) * m + p1   // term tc+1 (multiplier is 2(tc+1)/3)
  f(n + m, n, tc + 3)                         // term tc+2 is n + m (multiplier is 1)
}

def answer =
    (f(2G, 1G, 2).toString() as List)*.toInteger().sum()

This runs in around 115 ms, with sum of the digits taking about 10 ms. A port of this code to Java runs in around 1.3 ms in total.

Conclusion

Groovy was a nice language to do this in. Not having to mess about with BigInteger objects explicitly is always a boon in the Java world. Performance was quite disappointing though – given that f() is only ever called with BigInteger arguments, the compiler should be able to work out exactly what types the variables are and the generated bytecode should be pretty similar to that which javac outputs…but it’s not. In this case Groovy is nearly two orders of magnitude slower and I really don’t see a good reason for it at this stage of the language’s maturity.

Project Euler Problem #243

January 21, 2012

Read the details of the problem here

Summary

Find the smallest denominator d, having a resilience R(d) < 15499 ⁄ 94744

Solution

Due to real-life intrusion I haven’t been doing much Project Euler for the last few months but a conversation with a colleague at work the other day brought the subject up and I thought I’d revisit the site.

They’ve changed their format quite a lot now and though it’s still fundamentally the same, the concept of “badges” has been introduced – “awards” above and beyond just the old numbers of puzzles solved (which is now more granular). I’m a sucker for things like that and one of them – “Trinary Triumph” – just needed one more question to complete so it was, as the saying goes, like a red rag to a bull!

The problem was actually worded quite straightforwardly, and a little thought showed that it was very similar to Problem #69 and Problem #70. The numerator in the resilience function is the count of terms that are relatively prime to the denominator, i.e. Euler’s Totient (φ) function. The solution to Problem #69 was found by maximising n/φ(n) whereas in this problem we’re looking to minimise φ(d)/(d-1) – so it’s essentially the same thing. The answer is likely to be found by looking for a number that is the product of many small prime factors, perhaps with some multiplier (also a product of small primes), i.e. some of those factors may be raised to a higher power.
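Just to pin the definition down, here’s a (slow) brute-force sketch of the resilience function – R(d) is simply the fraction of the numerators 1..d-1 that are coprime to d:

def gcd
gcd = { a, b -> b == 0 ? a : gcd(b, a % b) }
def R = { d -> (1..<d).findAll { gcd(it, d) == 1 }.size() / (d - 1) }
assert R(12) == 4 / 11   // only 1/12, 5/12, 7/12 and 11/12 won't cancel down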

Problem #108 and Problem #110 also tackled the domain of products of powers of primes, and the solution to the latter seemed a good place to start, as the one to the former had ended up being a bit of a lucky hack. As per that solution, assuming that the answer was to be found somewhere at n * product(prime factors), and driving off that, meant that factorisation of large numbers wasn’t needed, so Groovy would remain a viable language to do this in.

Again, when doing these problems with Groovy, it’s best to try to work out the potential scope ahead of runtime as it doesn’t perform that well at number crunching.

What I did need to know was how to calculate the totient value for numbers with large numbers of prime factors – I’d kind of worked it out from a bit of guesswork for two factors when doing Problem #70, but that wouldn’t work here. Luckily Wikipedia came to the rescue, giving the formula quite succinctly as:

φ(n) = n(1 - 1/p1)(1 - 1/p2)…(1 - 1/pr), where p1, p2, …, pr are the distinct prime factors of n.
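A minimal sketch of that formula in Groovy, assuming the distinct prime factors are already known (so no factorisation is involved); folding in each (1 - 1/p) as an integer division by p and a multiplication by p - 1 keeps everything exact:

def phi = { n, primeFactors ->
  primeFactors.inject(n) { acc, p -> acc.intdiv(p) * (p - 1) }
}
assert phi(10, [2, 5]) == 4   // 1, 3, 7 and 9 are coprime to 10
assert phi(223092870, [2, 3, 5, 7, 11, 13, 17, 19, 23]) == 36495360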

Using this I found that the product of the prime sequence 2..23 gave a resilience just above what was being sought, whilst the product of 2..29 was itself a potential solution. The answer probably fell somewhere between these two, and this therefore formed the search space.

I then just reused the helper function I’d used in Problem #110, with a bit of an educated guess as to what the maximum power might be. If the multiplier was 2^5, i.e. 32, this would exceed the product of the longer prime sequence – so a maximum power of 4 seemed reasonable to use. I also seeded the generator function with the assumption that all of the primes between 2 and 23 were going to be factors – this seemed a reasonable assumption given what I knew of the totient function’s characteristics.

// Generates all power vectors of length maxlen, with elements drawn from
// range, passing each complete vector to the yield closure
def gen_powers(range, maxlen, yield) {
  gen_powers(range, maxlen, [], yield)
}

def gen_powers(range, maxlen, pows, yield) {
  if (pows.size() == maxlen)
    yield(pows)
  else if (pows.size() == 0)
    range.each { gen_powers(range, maxlen, [it], yield) }
  else
    // only prepend values >= the current head, so the finished vector is
    // non-increasing and the smallest primes carry the largest powers
    range.findAll { it >= pows[0] }
         .each { gen_powers(range, maxlen, [it]+pows, yield) }
}

def ( answer, TARGET, POWER_RANGE ) = [ Long.MAX_VALUE, 15499/94744, (0..4) ]
def primes = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ]

gen_powers(POWER_RANGE, primes.size(), [1] * primes.size()- 1) { powers ->
  // build the candidate denominator and collect its distinct prime factors
  def (d, factors) = [ 1G, [] ]
  powers.eachWithIndex { v, i ->
    d *= primes[i] ** v
    if (v > 0) factors << primes[i]
  }
  // keep d if it beats the best so far and R(d) = phi(d)/(d-1) is under target
  if ((d > 1) &&
      (answer > d) &&
      (((int)(factors.inject(d) { p, v -> p * (1 - 1/v) })) / (d-1) < TARGET)) {
    answer = d
  }
}

Despite the heavy use of the slow Groovy listops (and a bit of hacky code) this runs in around 940 ms.

Next will be Problem #65, a question on continued fractions, that will give me my “As Easy As Pi” award!

Conclusion

With the upfront analysis on the problem and the re-use of the power generator, Groovy made it easy to put together a terse solution that ran in a reasonable time. When calculating the value of the denominator I did find myself missing Python’s map() capability to process two lists against a function and return a result. It sounds like the sort of thing that should be in there; maybe it’s buried in the Groovy docs somewhere…
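For what it’s worth, Groovy’s transpose() can be pressed into service here – a sketch pairing prime bases with a vector of powers, as in the solution above:

def bases = [2, 3, 5]
def powers = [3, 2, 1]
def d = [bases, powers].transpose()
            .collect { p, v -> p ** v }
            .inject(1G) { acc, x -> acc * x }
assert d == 360   // 8 * 9 * 5

Not quite a two-list map(), but it does the job.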

A 360° View of 3D

September 21, 2011

The time has come at last to splash out on one of the new fangled plasma or LED TV sets.

My ancient 28" Panasonic QuintrixF box with its excellent Tau flat-screen, having done 10 years of faithful service (albeit with a couple of repairs during that time), has started to show tell-tale signs of impending failure – the Dolby Surround circuitry lost the centre channel a year ago and the picture does an occasional wobble at the top.

I would get it repaired but I’ve a feeling that finding parts might be problematic now and, besides, there are some excellent new TVs on the market (or so I’m told) that should do the job AND they’re 3D capable too.

I’ve put off buying a new TV over the last few years, even when true 1080p HD came out, as the picture quality on those sets, for all their HD-ness, just wasn’t up to that of the QuintrixF Tau screen. When it came to watching standard definition (SD) video, they couldn’t hold a candle to my Panasonic CRT TV.

But it’s time to look around.  I’m a bit of a sceptic when it comes to the marketing hype of consumer electronics and like to see the products in real-life use.

The up-market screens from pretty much all the major manufacturers are very capable now, though I’d argue that their HD picture quality is no better than my current TV’s, and they are much improved at up-scaling SD signals so as to avoid the blurry image especially prevalent on low-bitrate channels.

Buying such an up-market screen generally means getting a 3D-capable set, as the manufacturers pair their best panels with the best electronics.  I imagine that this seriously over-weights the statistics of people "buying into" 3D technology over the last year or so.

With this in mind, I’ve spent the last few months pestering friends and relatives who did take the plunge and bought a 3D-capable set last Christmas.  So, having now watched quite a bit of 3D footage, courtesy of their hospitality, I’ve made some observations about the medium.

Apart from the technical issues, such as faded colours, image cross-talk, the physical restrictions of passive 3D glasses and the costs of active 3D glasses that people often talk about, there are four perceptual problems of 3D that I’ve noticed.  This is just my opinion, but it’s my blog, so here they are:

Lack of Near-Field Parallax
I’m not sure if this is what it’s really called but it seems quite descriptive of the effect.  When an object is in the foreground and apparently close to the viewer, if that viewer moves their head they’d expect the object to move in relation to the background, revealing new areas and obscuring others.  Obviously, as the picture is actually flat and only presents a static 3D view, this doesn’t happen, which spoils the immersive experience.

This isn’t such a problem with more distant objects as a viewer expects much less parallax to occur between such an object and the background.  The best way around this seems to be to sit still and watch TV as if you’re in the cinema – maybe some people do this but, personally, I don’t.

Assuming an Infinite Depth of Field
In standard 2D content the director typically denotes the main subject of the scene by focusing on them with a shallow depth of field. The background appears deliberately out of focus, but that’s OK as it’s pretty much how our eyes work.

In 3D content there seems to be a preference to demonstrate its 3D nature by having an infinite depth of field with everything in focus, even when a "special" 3D effect is used, as is the wont in the traditional realm of 3D horror films.  This practice makes the next two effects seem worse than they might otherwise be.

Billboarding
I don’t believe that this is the correct term but it’s quite descriptive of the apparent effect.  In short, current 3D technologies don’t seem to be capable of providing a completely smooth depth transition between objects in a scene.  I don’t know if this is a feature of the cameras or the TVs.

This shortcoming leads to "billboarding" – it appears as if a 2D picture of a 3D scene has been pasted onto a billboard which is then set at a distance into the scene. This technique is used in video game development to reduce the computational complexity of rendering.

The scene is seen to be composed of a set of these "planes" rather than being smoothly graduated.  In the worst cases this effect is even seen on single objects. For example, one demo took the viewer across the Ponte Sant’Angelo in Rome (which I recommend visiting sometime) with close-ups of the 17th-century angelic statues.

Unfortunately, although the picture was excellent, the statues appeared dislocated. Where a hand was pointing “out” of the screen it was disjointed from the forearm, which was on another "plane", and this extended to the shoulder, wings, etc. It was as if it was a Channel 4 ident with pieces of stone suspended on invisible wires and visible as a solid body only when viewed from a specific angle.

Focal Distance Adjustment
This seems to be somewhat related to the "Infinite Depth of Field" issue I’ve mentioned above.  In order to "believe" a 3D scene, it seems that my eyes have to be fooled into assuming that the objects are a certain distance away.  When the scene shifts so that the "focus" is now apparently at a different distance, but remains sharp without my eyes having to re-adjust, it seems wrong. At best this just destroys the illusion of 3D until I "lock in" again but, at worst, it makes me feel a little seasick!

So what do I believe this means for 3D TV?

The technology will continue to improve so the technical concerns will probably be consigned to history within the next few generations of the systems themselves.

The limitations of the physical equipment will be solved, as, I imagine, will the "billboarding" effect described above – unless this is actually some form of physiological limitation of human cognitive processing.

The issue with lack of near field parallax will probably drive the adoption of VERY large 60”+ screens that can be placed farther away from the viewer so that the problem is just less pronounced.  Unless the TV can generate a private 3D image for each viewer and use some form of individual eye tracking I don’t see how else this might be tackled.

The other issues are really a matter of directorial style.  The modern way of shooting 2D TV seems to be to use close-in cameras with rapidly changing viewpoints in order to engage the viewer and make them feel like they’re actually part of the action.  What works well for 2D simply doesn’t work for 3D.  The rapidly changing perspective just leads to disorientation and destroys the immersive 3D experience.

I believe that, for 3D, viewers need to be treated more like a theatre audience as passive onlookers onto a scene.  Viewpoints need to be established and held in order for the audience to “lock in” to the scene’s perspective.  This means that the 2D version of a film won’t just be a single-eye image of a 3D film but, for a large part, a differently shot piece of work.  Obviously this will push up production costs.  I don’t like to think that we’ll be seeing 2D movies going the same way as black and white movies and only being shot as art nouveau retro pieces.

In short, 3D holds a lot of promise, especially in the gaming market where the scene is generated on the fly, but for general viewing I’m really not convinced that there’s a great need at the moment (sport may be an exception though), at least until the content production industry works out how to handle the artistic differences between 2D and 3D in order to get the most from both mediums.  There is some fantastic 3D content out there but not enough to warrant buying a 3D set specifically to see it.

I was interested to see that JetBrains, creators of the splendid IntelliJ IDEA IDE amongst other things, are working on their own “better-Java-than-Java” language, Kotlin.

Ostensibly this is to resolve some of the “issues” that the Java language has, whilst being simpler than the currently strongest competitor, Scala.  Kotlin is going to be supported as a first-class citizen in the IDEA IDE from around the end of 2011.

There’s a lot of talk on various fora about the need (or lack thereof) for introducing a new language when others like Scala, Groovy and Clojure are becoming more established.  This is similar to the fuss back in April 2011 when Gavin King "leaked" the news that Red Hat were working on their "Ceylon" JVM-hosted language.  The public launch of Ceylon is likely to be in the same sort of timescale.

Most of the chatter seems to be about the relative benefits of the languages’ features, the semantic overhead of learning them, efficiency of execution, dilution of Java skillsets, etc.  All are interesting and sound discussions.

What I’m not seeing is anyone talking about the pure commercial value to a company such as JetBrains or Red Hat of launching a language focused on their own ecosystem.  I don’t believe these guys are doing it for the love of the Java community. They’re doing it for cold, hard cash… and why not?

It’s interesting that JetBrains are saying that Kotlin is intended for "industrial use".  By this I’m expecting the real value-add features (which must be compelling) to be supported in the paid-for "Ultimate" edition of IDEA rather than the free "Community" version, and that they’re targeting the "enterprise" market.

They would need to convince "corporate" development managers of the benefits of Kotlin, allay their fears over the risks of selecting a single-vendor language, and get it embedded into their companies in order to monetise the investment in developing it.

That’s not to say they’re wrong to try – look at how successful VB6 was and that’s a single company language supported by a strong IDE!

They’ll have to be quick though.

Java 7 has just been released and Java 8 is due in late 2012.  Whilst J7 addresses quite a few "niggles" with Java (including reducing some noise around Generics and adding the equivalent of C#‘s "using" statement) and adds some good features into the JVM, it’s not the complete "Project Coin" package.

The "big thing" being sold in all these new languages is Lambdas/Closures, which won’t come until J8.  (You can see the split of features here.) This means that they’ll have 12 months to get some traction before the "official" Java language supports this.

Personally, I don’t believe that having closures in an OO imperative language is that important.  Useful, of course, but not the end of the world if they’re lacking.  There didn’t seem to be much demand for them in Java before Ruby on Rails became popular in 2006 and there was an outbreak of scripting-envy across the Java community.

Of course, with the JVM bytecode instruction InvokeDynamic being added in Java 7, dynamic languages such as JRuby should become much more performant so, if you really want closures on the JVM, why not just use one of those?

I think it’s disappointing that the official Java/JVM ecosystem has been on a bit of a hiatus for the last 5 years (Java 6 was released in 2006) but it’s kind of understandable given the situation with Sun and Oracle.  It’s also disappointing that Coin was split across two releases as this will delay adoption too and, in the Java world, allow third-parties to fill the perceived gap with semi-official solutions in the interim.

It will be interesting to see how this all plays out.  I do hope it doesn’t distract JetBrains or RedHat/JBoss from continuing to provide world-class platform support for the official Java stack on their products.

Project Euler Problem #86

Read the details of the problem here

Summary

Exploring the shortest path from one corner of a cuboid to another.

Solution

It’s been a while since I’ve posted a Project Euler solution and the ones I have posted have been catch-ups. The other day I saw a specific search on this site for a solution to Problem 86 which I hadn’t yet completed so I thought I’d give it a go…

On first reading this seemed a little unclear but it comes apart quite easily with a little prodding and poking. Taking the example that Euler gives and flattening it makes it apparent what’s going on. The shortest path (SF) is the hypotenuse of a Pythagorean triangle with sides of { 6, 8 (3+5), 10 }.

pe0086_1
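Just to confirm the arithmetic on that example (a throwaway check rather than part of the solution), the 6 x 5 x 3 cuboid unrolls to a right-angled triangle with sides 6 and 3 + 5:

def sp = { a, b, c -> Math.sqrt(a * a + (b + c) * (b + c)) }
assert sp(6, 5, 3) == 10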

The diagram below shows the case of an M x M x M cuboid to demonstrate the three variants of the shortest path in a very obvious manner.

pe0086_2

So it’s just a case of checking for Pythagorean triangles with an adjacent side of length M, and up to 2M for the opposite side. Note that, due to this abstraction from the actual cuboid, it’s not necessary to deal with the other two sides individually – the opposite side length is taken as the total of the depth and height of the cuboid. Once a suitable triangle is found, the number of solutions for it, taking into account the ways in which the opposite side length may be composed for the potential cuboids, is added onto the running total.

def ( answer, solutions ) = [ 1, 0 ]

while (true) {
  // answer is M, the longest side; i is the sum of the other two sides
  for (i in 2..(2 * answer)) {
    def hyp = Math.sqrt(answer * answer + i * i)
    if (hyp == Math.floor(hyp)) {
      // count the ways i can be split into two sides, each no longer than M
      solutions += (i > answer + 1) ?
          (answer + answer + 2 - i).intdiv(2) : i.intdiv(2)
    }
  }
  // stop at the first M where the cumulative count exceeds one million
  if (solutions > 1e6) break
  answer++
}

The solution is quite succinct and, even though it’s simply a brute-force algorithm, runs in around 1.31 seconds – so well within the 15 second target. Smarter algorithms might include changing this one into a bisection search across the solution space, or maybe pre-generating Pythagorean triangles and driving directly off of those.
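For the record, that pre-generation could be driven off Euclid’s formula – a sketch, not wired into the solution above: (m*m - n*n, 2*m*n, m*m + n*n) is a primitive triple for every coprime m > n > 0 of opposite parity, and all other triples are integer multiples of these.

def gcd
gcd = { a, b -> b == 0 ? a : gcd(b, a % b) }
def triples = []
for (m in 2..10) {
  for (n in 1..<m) {
    // opposite parity and coprime => a primitive triple
    if ((m - n) % 2 == 1 && gcd(m, n) == 1)
      triples << [m * m - n * n, 2 * m * n, m * m + n * n]
  }
}
assert [3, 4, 5] in triples.collect { it.sort() }
assert [20, 21, 29] in triples.collect { it.sort() }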

Conclusion

Groovy was perfectly fast enough for solving this problem, both in terms of coding and runtime. From a performance perspective it’s advisable to use the Math.floor() function rather than a straight cast to (int), and calling intdiv() is also a must compared to straight division.

The Tau of Pi

April 14, 2011

I came across this interesting article – The Tau Manifesto.

It proposes that the stalwart constant of geometry – Pi (π) – be replaced by a new, more natural constant – Tau (τ) – which is simply Circumference/Radius and so is equivalent to 2π. They make quite a convincing argument as to why this should be the case.

Those multiples of 2π in equations always made me twitchy… and now I realise why!

It always makes me chuckle when some marketing guy talks, in early 2011, about how their software fully supports HTML5 when that set of APIs isn’t even due to reach the Candidate Recommendation stage until 2012 (some 4 YEARS late) and has a target of 2014 for completion. The Last Call for comments isn’t until May 2012, so it really seems like a very foolishly brave statement to make.

http://www.bbc.co.uk/news/technology-12476859

At some point I’ll take a look to see exactly what subset of HTML5 APIs they’re currently supporting and at what specification snapshot level.