The ExecuteSQL function in FileMaker can be really useful. That said, it has its drawbacks: it can be very slow (where many JOINs are involved); it’s very fiddly to build queries, with little help on offer as to where the problem lies if a query doesn’t work; and queries are prone to breaking when field or table names change.
I have no magic solutions to any of these issues, but I do have one suggestion to help with the ‘breaking’ thing, and a little bit with the ‘fiddliness’ problem.
Here’s a simple eSQL query:
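(The snippet itself is missing from this post; here’s a minimal reconstruction, using the fields mentioned below:)

```filemaker
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE nameLast = 'Smith' " ; "" ; "" )
```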
This works fine, but if any of the field or table names (PPL, PPL::nameFirst, PPL::nameLast) change, the query will no longer work.
The solution is to put the various parameters into variables, as part of the ExecuteSQL calculation, like this:
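(Again the original block is missing; the sketch below is reconstructed from the four-part description that follows, so treat the variable names as illustrative.)

```filemaker
Let ( [
  // Part 1: fully qualified field names, straight from the database,
  // plus the argument to be used in the query
  ~nameFirst = GetFieldName ( PPL::nameFirst ) ;
  ~nameLast = GetFieldName ( PPL::nameLast ) ;
  ~arg = "smith" ;

  // Part 2: split each fully qualified name into the table occurrence
  // and the field name itself, quoted for use in the query
  ~table = Quote ( GetValue ( Substitute ( ~nameFirst ; "::" ; ¶ ) ; 1 ) ) ;
  ~fieldFirst = Quote ( GetValue ( Substitute ( ~nameFirst ; "::" ; ¶ ) ; 2 ) ) ;
  ~fieldLast = Quote ( GetValue ( Substitute ( ~nameLast ; "::" ; ¶ ) ; 2 ) ) ;

  // Part 3: assemble the query from the variables
  ~query = "SELECT " & ~fieldFirst & " FROM " & ~table &
           " WHERE LOWER(" & ~fieldLast & ") = ? " ;

  // Part 4: run the query
  ~result = ExecuteSQL ( ~query ; "" ; "" ; ~arg )
] ;
~result
)
```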
At first glance, this looks unnecessarily complex, but stick with it!
The whole thing is wrapped up in one ‘Let’ statement. Part 1 of the ‘Let’ statement is where we define calculation variables with the fully qualified field names of the fields that we need to reference in the query, plus the argument to be used. (The tilde at the start of the variable names is just the prefix I use for calculation variables.) Note that the field name variables can’t be used in the query itself, because they have the table occurrence as part of them.
So, Part 2 of the variable declaration basically takes the fully qualified field names and extracts the field name itself (e.g. ‘nameFirst’, without ‘PPL::’). It does this by:
- Using the Substitute function to convert ‘PPL::nameFirst’ to ‘PPL¶nameFirst’ (i.e. replacing the ‘::’ with a carriage return),
- And then using GetValue to separate the first and second values into the table name (‘PPL’) and the field name (‘nameFirst’).
Note that these values have to be put in quotes, for use later in the query.
Part 3 assembles the query, also defined as a calculation variable, and using the variables defined in Part 2.
And finally, Part 4 defines one last variable, ~result, which is the result of the query.
This may seem like a lot of hard work, but I’m finding that once I’m in the habit of using ExecuteSQL in this way, it’s actually easier and very logical. And, crucially, it makes your queries much more robust: if any of the field names change, the query is automatically updated, because the field names in the initial variable declaration (Part 1) come directly from the database. Unbreakable? Well, obviously not, but certainly less breakable.
(Note – this approach is not my invention, but has been adapted from solutions found on the FileMaker TechNet.)
I’ve been getting to grips with the ExecuteSQL function. To say that I’ve been enjoying it would be partly true – it can be a little tricky, not without its frustrations, and disappointing in its performance. BUT I’m beginning to see just how powerful and useful it could be. There are a couple of excellent resources that I’ve found very useful. One is FileMaker Hacks – Kevin Frank’s articles about ExecuteSQL on his blog are excellent. Kevin writes so clearly, and presents the material so well, that even when he’s covering stuff that I already know (which isn’t often!), it’s a delight to read his articles. His ExecuteSQL posts are of particular relevance here.
Also invaluable is Beverly Voth’s “The Missing FileMaker 12 ExecuteSQL Reference” – there’s a link to it on FileMaker Hacks too.
Here are some examples of expressions using the function:
Basic List of Field Contents
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE nameLast = 'Smith' " ; "" ; "" )
This will give a list of the first names of people whose last name is Smith. A couple of things that tripped me up, even at this basic stage:
- “PPL” is a table occurrence, not a base table name.
- ‘Smith’ is case sensitive.
To get around the case sensitivity issue, I’ve started changing all text to lower case within the expression, e.g.
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE LOWER(nameLast) = 'smith' " ; "" ; "" )
The next step is to use the query term (“smith”) as an argument, rather than hard-coding it in the query string, e.g.
ExecuteSQL ( "SELECT COUNT(*) FROM PPL WHERE LOWER(nameLast) = ? " ; "" ; "" ; "smith" )
So now we start to see how ExecuteSQL can take the place of FM Finds. Want to know how many Sue Smiths you’ve got?
ExecuteSQL ( "SELECT COUNT(*) FROM PPL WHERE LOWER(nameLast) = ? AND LOWER(nameFirst) = ? " ; "" ; "" ; "smith" ; "sue" )
And, of course, the arguments can use fields instead of the literal values in the example.
There’s been a lot of excitement about the fact that ExecuteSQL can be used for querying related data without the need for that relationship to be defined via FileMaker’s relationship graph. This can mean simplification of the graph – often quite complex relationships have to be drawn on the graph just for the sake of a simple calculation on related data.
So, moving on to a basic JOIN, i.e. relating 2 tables and getting related data without (necessarily) having a relationship in Filemaker, say we want to know how many Sessions there are for a given Programme (where a Programme can have zero or many Sessions):
ExecuteSQL ( "SELECT COUNT(*) FROM SSN AS s JOIN PGM AS p ON s.id_PGM = p.id WHERE p.id = 'PGM00002866' " ; "" ; "" )
Note how we allocate an alias of “s” to use when we refer to the SSN table occurrence later in the expression, and “p” for the PGM table occurrence.
The “AS” in the previous example is optional, and the whole thing can be rearranged to be a bit more readable, e.g.
ExecuteSQL ( "SELECT COUNT(*)
FROM SSN s, PGM p
WHERE s.id_PGM = p.id
AND p.id = 'PGM00002866' " ; "" ; "" )
And now for a 3 level “find”:
ExecuteSQL ( "SELECT s.id FROM SSN s, PGM p, PTI t
WHERE s.id_PGM = p.id
AND p.id_PTI = t.id
AND LOWER(t.title) = s.FILTERINPUT " ; "" ; "" )
BUT (and it’s a big, disappointing BUT), ExecuteSQL can be very slow. In the last example above, looking for matching text in the PTI table, the calc takes about 6-8 seconds to query a set of about 24,000 records. That may not be a big deal when running a report, or an off-line process, but on an interactive screen (I was trying to use it for a “spotlight search” function), it’s clearly not good enough. In this situation, it looks like you’re better off “denormalising” the data (i.e. repeating, in the parent table, the field you want to query).
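Denormalising for the example above might look something like this – a stored field on the Session table that auto-enters the related title. (The field name and the relationship chain here are hypothetical; adapt to your own graph.)

```filemaker
// SSN::titleDenorm – text field, auto-enter calculation (stored),
// assuming table occurrences chained SSN → PGM → PTI on the graph:
SSN_PGM_PTI::title
```

The “spotlight” query can then SELECT against s.titleDenorm directly, with no JOINs to slow it down.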
A recent post on the TechNet forum, plus a succession of typically helpful answers, provided a very useful tutorial on how best to use the Case function to calculate a marked-up cost, with the markup being based on the value of Cost.
Here’s the original formula. “Why won’t it work?” was the cry for help:
Case (
Cost < "5" ; Cost * "4" ;
Cost > "5.01" and Cost < "15" ; Cost * "3.5" ;
Cost > "14.99" and Cost < "25" ; Cost * "3" ;
Cost > "24.99" and Cost < "40" ; Cost * "2.75" ;
Cost > "39.99" and Cost < "60" ; Cost * "2.5" ;
Cost > "59.99" and Cost < "100" ; Cost * "2.25" ;
Cost > "99.99" and Cost < "200" ; Cost * "2" ;
Cost > "199" ; Cost * "1.6" )
Well, with the field “Cost” defined as a number, and the calculation result defined as a number, the first step is to remove the quotes, which makes FileMaker treat the numeric values as numbers, rather than as text:
Case (
Cost < 5 ; Cost * 4 ;
Cost > 5.01 and Cost < 15 ; Cost * 3.5 ;
Cost > 14.99 and Cost < 25 ; Cost * 3 ;
Cost > 24.99 and Cost < 40 ; Cost * 2.75 ;
Cost > 39.99 and Cost < 60 ; Cost * 2.5 ;
Cost > 59.99 and Cost < 100 ; Cost * 2.25 ;
Cost > 99.99 and Cost < 200 ; Cost * 2 ;
Cost > 199; Cost * 1.6)
This will now work, but it’s making much harder work of the calc than it needs to. FileMaker works through the Case statement only until it meets a true condition, then it bails out – so there’s no need for all of these > and < evaluations.
So, enter V3:
Case (
Cost < 5 ; Cost * 4 ;
Cost < 15 ; Cost * 3.5 ;
Cost < 25 ; Cost * 3 ;
Cost < 40 ; Cost * 2.75 ;
Cost < 60 ; Cost * 2.5 ;
Cost < 100 ; Cost * 2.25 ;
Cost < 200 ; Cost * 2 ;
Cost * 1.6)
(Note the final default value, 1.6, which applies to any value of Cost equal to or greater than 200 – no need for an evaluation there; if we get that far, that’s the value to use.)
The next efficiency is to use calculation variables for the value of Cost and for the markup to be applied.
So, finally, V4:
Let ( [
$Cost = Cost ;
$Markup = Case (
$Cost < 5 ; 4 ;
$Cost < 15 ; 3.5 ;
$Cost < 25 ; 3 ;
$Cost < 40 ; 2.75 ;
$Cost < 60 ; 2.5 ;
$Cost < 100 ; 2.25 ;
$Cost < 200 ; 2 ;
1.6 ) ] ;
$Cost * $Markup)
But then came another interesting point: whether the Cost field itself is the right place for the calculation to happen. In the case of a “net price” such as this, which may need to be overridden, the better way is to have a separate Markup field, defined as an auto-enter calc, with the initial value of the markup calculated as above. The (new) Price field is then defined as a simple calc field, i.e. Price = Markup * Cost. On the Markup field, the option “Do not replace existing value of field (if any)” should be ticked, to prevent historical and “override” values being replaced.
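Pulling that together, the two field definitions might look like this (a sketch of the setup just described):

```filemaker
// Markup – number field, auto-enter calculation, with
// "Do not replace existing value of field (if any)" ticked:
Case (
  Cost < 5 ; 4 ;
  Cost < 15 ; 3.5 ;
  Cost < 25 ; 3 ;
  Cost < 40 ; 2.75 ;
  Cost < 60 ; 2.5 ;
  Cost < 100 ; 2.25 ;
  Cost < 200 ; 2 ;
  1.6
)

// Price – calculation field, result type Number:
Markup * Cost
```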
For a more comprehensive look at calc and auto-enter fields, see here. Thanks, as ever, to the TechNet contributors who combine to make it such a great resource.
Having said in the previous post that I’m trying to use FileMaker’s own feature set wherever possible, there are some areas where it just ain’t good enough.
One such situation is in the formatting of text data entered by the user, especially name and address details. For instance, if you want all names to be held in “proper” case (e.g. “Jane Smythe”, rather than “JANE smythe”, “jane Smythe” or any other permutation), the Proper calculation function is available – you just need to make your field an auto-enter calc, with the calculation defined as “Proper ( Self )”.
But this is very limited, and can actually introduce mistakes into data that has been correctly entered. For example, if your user has carefully entered “John O’Shea”, she’ll be cheesed off when FileMaker changes it to “John O’shea”. Likewise “Mary MacDonald” becomes “Mary Macdonald”, and “BBC” becomes “Bbc”.
For a more thorough treatment, I’ve had to look beyond FileMaker to a couple of Custom Functions, both found on Brian Dunning’s excellent site. The CFs are:
- Title, by Howard Schlossberg, which does a great job of putting capital letters where they should be, leaving them as caps where they should be, and making them lower case where appropriate. It’s also quite adaptable, so you can add exceptions to be treated differently, etc. It looks like Howard created it to format titles (of albums, books, etc.) properly, but it works just as well on names of people, organisations, etc.
- SuperTrim, by Debi Fuchs, which removes leading and trailing spaces, tabs and carriage returns from the entered text. Again, like FM’s native function (in this case, “Trim”), but better.
So, once the CFs are added to the application, it’s a straightforward job to define all appropriate text fields as auto-enter calcs with this calculation behind them:
SuperTrim ( Title ( Self ; "" ) )
I’ve rather avoided the Let function until now – until you use it, it seems to add a layer of complexity, and, as a novice, I’ve always liked keeping things simple.
But now I’m beginning to realise that simplicity is exactly what the Let statement offers. Not only does it enable you to make calculations more readable, but it greatly increases the efficiency of the calculation.
Here’s a good example.
Say I have a field called BKG::BKG_Cost, which is the cost of a Booking. The cost of the Booking is the total of the costs of the Resource Bookings within the Booking. The cost depends on whether the date of the first Session has passed – if the deadline has passed, we increase the cost by 15%.
So, one way of defining the field would be like this:
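(The calculation itself didn’t survive here; this sketch follows the description above. The deadline test – comparing today’s date with a first-Session date field – is illustrative, as the original field names aren’t visible.)

```filemaker
// BKG::BKG_Cost, without Let – note the Sum over the
// related Resource Bookings is evaluated twice:
If ( Get ( CurrentDate ) > BKG to SSN::SSN_Date ;
  Sum ( PGM to SSN to RBK::RBK_Cost ) * 1.15 ;
  Sum ( PGM to SSN to RBK::RBK_Cost )
)
```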
But if we use a Let statement to define the total booking cost, we end up with this:
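(Again the block is missing; a reconstruction using the calculation variable “BookingTot” – the deadline test is illustrative.)

```filemaker
// BKG::BKG_Cost, with Let – the Sum is evaluated once only:
Let (
  BookingTot = Sum ( PGM to SSN to RBK::RBK_Cost ) ;
  If ( Get ( CurrentDate ) > BKG to SSN::SSN_Date ;
    BookingTot * 1.15 ;
    BookingTot
  )
)
```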
The main gain here is that we reference the field PGM to SSN to RBK::RBK_Cost only once, and do the Sum only once, i.e. when defining the calculation variable “BookingTot”. In the first version, to apply the increase, we do that twice. Using the calculation variable also makes the whole thing more readable. And the more variables you use, the more readable it becomes. Here it is again, with a couple more variables (note the need, when defining more than one variable, to enclose the variable definitions in square brackets):
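(The original block is missing here too; this sketch follows the description – names other than BookingTot, and the deadline test, are illustrative.)

```filemaker
Let ( [
  BookingTot = Sum ( PGM to SSN to RBK::RBK_Cost ) ;
  DeadlinePassed = Get ( CurrentDate ) > BKG to SSN::SSN_Date ;
  Uplift = 1.15
] ;
If ( DeadlinePassed ; BookingTot * Uplift ; BookingTot )
)
```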