The ExecuteSQL function in FileMaker can be really useful. That said, it has its drawbacks: it can be very slow (where many JOINs are involved), queries are fiddly to build (with little help on offer as to where the problem lies if a query doesn’t work), and queries are prone to breaking when field or table names change.
I have no magic solutions to any of these issues, but I do have one suggestion to help with the ‘breaking’ thing, and a little bit with the ‘fiddliness’ problem.
Here’s a simple eSQL query:
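(The original snippet isn’t reproduced here; reconstructed from the names discussed in this post, it would be along these lines:)

```
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE nameLast = ? " ; "" ; "" ; "Smith" )
```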
This works fine, but if any of the field or table names (PPL, PPL::nameFirst, PPL::nameLast) change, the query will no longer work.
The solution is to put the various parameters into variables, as part of the ExecuteSQL calculation, like this:
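(A sketch of the pattern, not the exact original calculation – the ~ variable names are illustrative:)

```
Let ( [
  // Part 1: fully qualified field names, straight from the schema, plus the argument
  ~fqNameFirst = GetFieldName ( PPL::nameFirst ) ;   // returns "PPL::nameFirst"
  ~fqNameLast  = GetFieldName ( PPL::nameLast ) ;    // returns "PPL::nameLast"
  ~arg = "Smith" ;

  // Part 2: split each qualified name into the table occurrence and the (quoted) field name
  ~table     = GetValue ( Substitute ( ~fqNameFirst ; "::" ; ¶ ) ; 1 ) ;
  ~nameFirst = Quote ( GetValue ( Substitute ( ~fqNameFirst ; "::" ; ¶ ) ; 2 ) ) ;
  ~nameLast  = Quote ( GetValue ( Substitute ( ~fqNameLast ; "::" ; ¶ ) ; 2 ) ) ;

  // Part 3: assemble the query from the variables
  ~query = "SELECT " & ~nameFirst & " FROM " & ~table & " WHERE " & ~nameLast & " = ?" ;

  // Part 4: run it
  ~result = ExecuteSQL ( ~query ; "" ; "" ; ~arg )
] ;
  ~result
)
```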
At first glance, this looks unnecessarily complex, but stick with it!
The whole thing is wrapped up in one ‘Let’ statement. Part 1 of the ‘Let’ statement is where we define calculation variables with the fully qualified field names of the fields that we need to reference in the query, plus the argument to be used. (The tilde at the start of the variable names is just the prefix I use for calculation variables.) Note that the field name variables can’t be used in the query itself, because they have the table occurrence as part of them.
So, Part 2 of the variable declaration basically takes the fully qualified field names and extracts the field name itself (e.g. ‘nameFirst’, without ‘PPL::’). It does this by:
- Using Substitute to convert ‘PPL::nameFirst’ to ‘PPL¶nameFirst’,
- And then separating the first and second values into the table name (‘PPL’) and fieldname (‘nameFirst’).
Note that these values have to be put in quotes, for use later in the query.
Part 3 assembles the query, also defined as a calculation variable, and using the variables defined in Part 2.
And finally, Part 4 defines one last variable, ~result, which is the result of the query.
This may seem like a lot of hard work, but I’m finding that, once in the habit of using ExecuteSQL in this way, it’s actually easier and very logical. And, crucially, it makes your queries much more robust: if any of the field names change, the query is automatically updated, because the field names in the initial variable declaration (Part 1) come directly from the database. Unbreakable? Well, obviously not, but certainly less breakable.
(Note – this approach is not my invention, but has been adapted from solutions found on the FileMaker TechNet.)
In Part 1, I wrote about my first experiences with the ExecuteSQL function in FileMaker, looking at returning a basic list of field contents, passing arguments, and the use of simple joins. The function is proving useful, and I’m delving further into it, with help from others.
Here are my latest discoveries. (It’s all rather random – I’m finding out more as and when I need it – so this may not appear to be a logical order in which to describe things.)
Putting the query in a variable
It seems to be common practice to put the query statement and its result into calculation variables. (I’ve yet to see the full benefits of this, but enough experts do it to make me think it’s good practice!)
So instead of:

ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE nameLast = 'Smith' " ; "" ; "" )

we have:

$query = "SELECT nameFirst, nameLast
FROM PPL
WHERE nameLast = 'Smith'" ;
$result = ExecuteSQL ( $query ; "" ; "" )
Field and row separators
By default, if the function is returning more than one field (e.g. nameFirst and nameLast in the above example), these will be separated by a comma. You can put any other separator as the second parameter of the function, e.g.
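(An illustrative example of my own – separating the two name fields with a pipe:)

```
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL WHERE nameLast = ? " ; " | " ; "" ; "Smith" )
```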
(See the effect of this in the “Result” pane above.)
And if you want to separate rows with something other than the default carriage return, just put that in as the third parameter, e.g.
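(Again illustrative – here keeping the pipe between fields and using a semicolon-space between rows:)

```
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL WHERE nameLast = ? " ; " | " ; "; " ; "Smith" )
```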
It won’t come as a great surprise that the ORDER BY clause in SQL enables you to sort the output. The order can be ASCending or DESCending, but if you want the result in ascending order, you don’t need to specify it, as it’s the default.
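For example (my own illustration):

```
// sort by last name descending, then first name (ascending is the default)
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL ORDER BY nameLast DESC, nameFirst " ; "" ; "" )
```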
This one’s a little bit fiddly, and I needed the generous help of members of the FileMaker TechNet to get it working.
I needed to limit the scope of the query. I had already isolated a list of keys, via a combination of a FileMaker find and an eSQL query, and I had originally hoped to base the next eSQL query on the table occurrence in which these keys were listed. Unfortunately, though, eSQL always uses the BASE TABLE of any table occurrence – it doesn’t recognise the filtering via relationships that is so useful elsewhere in FileMaker.
So I’ve got a list of keys of the records, in a global field, PPL::FILTERIDS. I want to query within this filtered subset – “within the group of records that I’ve already found, tell me how many are women, how many have disabilities, etc.”.
To start with, I’ve just done a query to give me the people’s names, like this:
So we’re saying: give me the names of each person whose id appears IN the list of IDs. But we can’t just use the field itself, because it’s a return-delimited list. The IN clause needs a comma-separated list enclosed in parens, and, furthermore, each value in the list needs to be in (single) quotes (if the values are alphanumeric text). So the setting of ~idList includes replacing each CR with ‘,’. Then, in the query variable itself, we add an opening paren plus ‘, and a closing ‘ plus paren. Fiddly, but quite portable once you’ve got it working the first time (I hope!).
Note that it’s a bit less fiddly with numeric strings, in that they don’t need the single quotes around each value.
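Putting that together, the calculation might look something like this (a sketch, with my usual ~ prefix for calculation variables):

```
Let ( [
  // turn the return-delimited keys into  id1','id2','id3  (quote-comma-quote between values)
  ~idList = Substitute ( PPL::FILTERIDS ; ¶ ; "','" ) ;

  // then add the opening paren plus ' and the closing ' plus paren around the whole list
  ~query  = "SELECT nameFirst, nameLast FROM PPL WHERE id IN ('" & ~idList & "')" ;

  ~result = ExecuteSQL ( ~query ; "" ; "" )
] ;
  ~result
)
```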
With this as a basis, we can carry on with more complex querying. More about that in a later post.
(Note – this is an update of an older post about filtering portals via relationships.)
The Requirement: A quick, user-friendly “spotlight” style search option, to filter a portal of people’s names.
Demo file: QuickFindDemo.zip
Here’s how the finished interface looks:
Using FileMaker’s nice “River” theme, we have a QuickFind box in which the user enters a string, and, as they type, the list expands or contracts to show matching records. (Note that the “Search” button in the screenshot is not part of this technique – that opens a popover in which the user can specify a number of other criteria – show only people over a certain age, who have attended this month, etc.)
The input field is a global field within the table to be searched. In this example, it’s called PPL::FILTERINPUT.
When this field is empty, it looks like this:
The magnifying glass icon and the “QuickFind” text are hidden, via the very useful “hide object when” feature, the condition here being “not IsEmpty (PPL::FILTERINPUT)”.
The portal showing the list of people’s names, plus other information as required, is based on a self-join. I’ve called the table occurrence PPL_PPL~filter with the self-join relationship defined as:
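From the description that follows, the key pairing is presumably:

```
PPL::FILTERIDS = PPL_PPL~filter::id
```

(plus a second predicate on the ‘isDead’ field, whose exact form isn’t shown here).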
In the PPL table, “id” is the primary key. So the portal will show anybody whose id appears in the global field PPL::FILTERIDS. (We can ignore the “isDead” thing, which just excludes dead people from the list.)
So, how does PPL::FILTERIDS get populated?
The population of the match field involves two scripts, “Initiate filter” and “Filter portal”. The second one, which does the searching, ONLY runs after a given pause between the user’s keystrokes. This is important, because you don’t want the searching script to run every time the user presses a key – it would be too slow. The duration of the delay that you allow depends mainly on the typing speed of the user – I’ve settled on 0.35 seconds.
So, the first script, “Initiate Filter”, runs EVERY time the user presses a key – it’s an OnObjectModify triggered script attached to PPL::FILTERINPUT. This is what it does:
The script is passed two parameters, one is the name of the field that we want to populate (i.e. PPL::FILTERIDS), and the other is the object name (not the field name) of the input field – I’ve called mine “PeopleNameFilter”.
These two parameters are stored as global variables, for use in the subsequent script. (Note that I use the GSP custom function to “get” these parameter values – it’s available here on Brian Dunning’s site).
We then use “Install OnTimer Script” to schedule the searching script every 0.35 seconds – this has the effect of allowing the user that amount of time to press another key – if s/he modifies the field again within 0.35 seconds, the script runs again, and the new OnTimer Script replaces the previous one.
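In script-step form, that amounts to something like this (a sketch – the $$ variable names are my own):

```
# "Initiate Filter" – OnObjectModify trigger on PPL::FILTERINPUT
Set Variable [ $$filterField ; Value: GSP ( 1 ) ]      # e.g. "PPL::FILTERIDS"
Set Variable [ $$filterObject ; Value: GSP ( 2 ) ]     # e.g. "PeopleNameFilter"
Install OnTimer Script [ "Filter Portal" ; Interval: .35 ]
```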
When the user has finished typing (i.e. 0.35 seconds has elapsed), the “Filter Portal” script runs. It looks like this:
If the input field is empty, we just initialise the multi-key field. This will have the effect of showing no records in the portal. We then use “Install OnTimer Script” again, with a zero interval, to cancel the repetition of the current script.
If we have an input string, we use a fairly straightforward ExecuteSQL statement to find the matching ids. Here’s the eSQL statement:
ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ? " ; "" ; "" ; "%" & Lower ( PPL::FILTERINPUT ) & "%" )

(Note that the input is lowercased here, to match searchableData, which is stored in lower case.)
PPL::searchableData, the field searched by the ExecuteSQL statement, is a concatenated text field, including whatever pieces of data you want to search on. (I’ve experimented with making this a calculated field, but have found it much more efficient to have it as a plain text field, updated by a script whenever relevant data changes.) In this case, the field is updated as:
Lower ( nameFirst & " " & nameLast & " " & address1 )
So, at the end of all this, we have PPL::FILTERIDS populated with the primary keys of all records where matching text has been found (anywhere within the searchableData field – it doesn’t have to be at the beginning), and the portal showing those records. If there are NO matching records, we say so, then cancel the OnTimer script (by installing another (blank) one), and go back to where we came from.
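Putting those steps together, “Filter Portal” might look something like this (a script-step sketch; $$filterField and $$filterObject are the globals set by “Initiate Filter”):

```
# "Filter Portal" – runs when the 0.35-second timer fires
If [ IsEmpty ( PPL::FILTERINPUT ) ]
    Set Field By Name [ $$filterField ; "" ]       # empty multi-key: portal shows nothing
    Install OnTimer Script [ Interval: 0 ]         # cancel the repeating timer
    Exit Script [ ]
End If
Set Field By Name [ $$filterField ;
    ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ? " ; "" ; "" ;
                 "%" & Lower ( PPL::FILTERINPUT ) & "%" ) ]
If [ IsEmpty ( GetField ( $$filterField ) ) ]
    Show Custom Dialog [ "No matching records" ]
    Install OnTimer Script [ Interval: 0 ]         # cancel via a blank OnTimer
    Go to Object [ Object Name: $$filterObject ]
End If
```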
Note that the use of script parameters is optional, but it makes it possible to reuse the scripts elsewhere, should we need other similar QuickFinds.
As ever, please comment if you can suggest a better way of doing this. None of it is really of my own invention, and is documented here to remind myself of how it works, but if it’s of use to you, so much the better!
I’ve been getting to grips with the ExecuteSQL function. To say that I’ve been enjoying it would be partly true – it can be a little tricky, not without its frustrations, and disappointing in its performance. BUT, I’m beginning to see just how powerful and useful it could be. There are a couple of excellent resources that I’ve found very useful: one is FileMaker Hacks – Kevin Frank’s articles about ExecuteSQL on his blog are excellent. Kevin writes so clearly, and presents the material so well, that even when he’s repeating stuff that I already know (which isn’t often!), it’s a delight to read his articles. Of particular relevance here are:
Also, Beverly Voth’s “The Missing FileMaker 12 ExecuteSQL Reference” – there’s a link to this on FileMakerHacks too – here.
Here are some examples of expressions using the function:
Basic List of Field Contents
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE nameLast = 'Smith' " ; "" ; "" )
This will give a list of the first names of people whose last name is Smith. A couple of things that tripped me up, even at this basic stage:
- “PPL” is a table occurrence, not a base table name.
- ‘Smith’ is case sensitive.
To get around the case sensitivity issue, I’ve started changing all text to lower case within the expression, e.g.
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE LOWER(nameLast) = 'smith' " ; "" ; "" )
The next step is to use the query term (“smith”) as an argument, rather than including it in the SELECT statement, e.g.
ExecuteSQL ( "SELECT count (*) FROM PPL WHERE LOWER(nameLast) = ? " ; "" ; "" ; "smith" )
So now we start to see how ExecuteSQL can take the place of FM Finds. Want to know how many Sue Smiths you’ve got?
ExecuteSQL ( "SELECT count (*) FROM PPL WHERE LOWER(nameLast) = ? and LOWER(nameFirst) = ? " ; "" ; "" ; "smith" ; "sue" )
And, of course, the arguments can use fields instead of the literal values in the example.
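For instance (the global search fields here are hypothetical):

```
ExecuteSQL ( "SELECT count (*) FROM PPL WHERE LOWER(nameLast) = ? and LOWER(nameFirst) = ? " ;
             "" ; "" ; Lower ( PPL::gSearchLast ) ; Lower ( PPL::gSearchFirst ) )
```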
There’s been a lot of excitement about the fact that ExecuteSQL can be used for querying related data, without the need for that relationship to be defined on FileMaker’s relationship graph. This can mean simplification of the graph – often quite complex relationships have to be drawn on the graph just for the sake of a simple calculation on related data.
So, moving on to a basic JOIN, i.e. relating 2 tables and getting related data without (necessarily) having a relationship in Filemaker, say we want to know how many Sessions there are for a given Programme (where a Programme can have zero or many Sessions):
ExecuteSQL ( "SELECT COUNT (*) FROM SSN AS s JOIN PGM AS p ON s.id_PGM = p.id WHERE p.id = 'PGM00002866' " ; "" ; "" )
Note how we allocate an alias of “s” to use when we refer to the SSN table occurrence later in the expression, and “p” for the PGM table occurrence.
The “AS” in the previous example is optional, and the whole thing can be rearranged to be a bit more readable, e.g.
ExecuteSQL ( "SELECT COUNT (*)
FROM SSN s, PGM p
WHERE s.id_PGM = p.id
AND p.id = 'PGM00002866' " ; "" ; "" )
And now for a 3 level “find”:
ExecuteSQL ( "SELECT s.id FROM SSN s, PGM p, PTI t
WHERE s.id_PGM = p.id
AND p.id_PTI = t.id
AND LOWER(t.title) = s.FILTERINPUT " ; "" ; "" )
BUT (and it’s a big, disappointing BUT), ExecuteSQL can be very slow. In the last example above, looking for matching text in the PTI table, the calc takes about 6-8 seconds to query a set of about 24,000 records. That may not be a big deal when running a report, or an off-line process, but on an interactive screen (I was trying to use it for a “spotlight search” function), it’s clearly not good enough. In this situation, it looks like you’re better off “denormalising” the data (i.e. repeating, in the parent table, the field you want to query).