Over the years, I’ve spent quite a lot of time on calendars. It seems that most apps require some form of calendar functionality, and I’ve found it difficult to find or make one that does what I need it to, quickly and without costing too much.
The clever people at iGeek came up with a tidy solution a while ago, involving a device-specific table of calendar dates, refreshed whenever the user selects a new month. For the month view, the display is via 7 filtered portals, one for each day of the week, with a maximum of 6 records for each day. (Note that any changes to the calendar require work on each of the 7 portals, which is a nuisance, but which is a huge improvement on the 42 portals that I’ve seen in other calendars!)
It worked really well in a LAN setup, but became a little slow over the internet, due largely to the handling of records each time the month changed, and the need for a ‘replace field contents’ with each refresh seemed especially slow. So, working on iGeek’s solid foundation, I converted the solution to use a ‘virtual list’ (i.e. with the data held in global variables). While I was at it, I also incorporated the ‘selector-connector’ method of linking tables (i.e. using a dedicated global field – called ‘XJOIN’ – to relate all the tables via a cross-join relationship). This is another technique which I picked up from my friends at iGeek, and one which I really like.
This resulted in a considerable performance improvement. At this stage, the calendar doesn’t do a great deal, apart from displaying a mini-calendar for the month. The user can scroll through previous and next months, and select a month and/or year. Clicking on a day performs a script which selects the date, and creates an ‘event’ for that date. The next version will enable the user to record details for the events, and there will be a week and day view, powered by the same virtual list.
I realise that I’m reinventing the wheel here – there are dozens of perfectly capable calendars out there – but as a learning exercise, it’s been very useful, and will ultimately be incorporated in my solution. So I thought I’d put it up for grabs in case anyone else felt that it could be useful.
The unlocked demo file is available here. It is, of course, offered with no guarantees or liability, but I would welcome comment or question.
The file comes complete with a virtual list table, VL_CalDays. Like any VL table, this has records of which the keys are sequentially numbered from 1 – this is the crucial part of the virtual list technique, as the record number will be used to populate the corresponding repetition within the global variables. If you need to re-build this table, there’s a script in the file to enable you to do so – ‘Create VL recs’.
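A record-creation script along these lines would do the job (a hypothetical sketch – the layout name, field name and record count are assumptions, and the script in the file may differ; 42 records covers the 7 × 6 day grid of the month view):

```
# 'Create VL recs' – hypothetical sketch
Go to Layout [ "VL_CalDays" ]
Show All Records
Delete All Records [ With dialog: Off ]
Set Variable [ $i ; Value: 0 ]
Loop
    Exit Loop If [ $i ≥ 42 ]    # 7 days × 6 rows in the month view
    New Record/Request
    Set Variable [ $i ; Value: $i + 1 ]
    Set Field [ VL_CalDays::key ; $i ]
End Loop
```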
If you’re new to virtual lists, you may want to read an article I wrote about the basics of virtual list a while ago, in ‘Virtual Lists – Let’s Start at the Very Beginning‘. (Many other articles about Virtual List are available – I find Kevin Frank’s stuff at Filemaker Hacks especially well written and useful).
The ExecuteSQL function in Filemaker can be really useful. That said, it has its drawbacks: it can be very slow (where many JOINs are involved), it’s very fiddly to build queries (with little help on offer as to where the problem lies if the query doesn’t work), and queries are prone to breaking when field or table names change.
I have no magic solutions to any of these issues, but I do have one suggestion to help with the ‘breaking’ thing, and a little bit with the ‘fiddliness’ problem.
Here’s a simple eSQL query:
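Something like this, using the fields mentioned below (a reconstruction – the exact query isn’t shown):

```
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL" ; "" ; "" )
```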
This works fine, but if any of the field or table names (PPL, PPL::nameFirst, PPL::nameLast) change, the query will no longer work.
The solution is to put the various parameters into variables, as part of the ExecuteSQL calculation, like this:
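A sketch of the whole calculation, reconstructed from the four parts described below (I’m assuming the standard GetFieldName and Quote functions here – the original may differ in detail):

```
Let ( [
    // Part 1 – fully qualified field names, straight from the schema
    ~field1 = GetFieldName ( PPL::nameFirst ) ;
    ~field2 = GetFieldName ( PPL::nameLast ) ;
    ~arg    = "Smith" ;

    // Part 2 – split off the table occurrence and quote the parts
    ~table     = Quote ( GetValue ( Substitute ( ~field1 ; "::" ; ¶ ) ; 1 ) ) ;
    ~nameFirst = Quote ( GetValue ( Substitute ( ~field1 ; "::" ; ¶ ) ; 2 ) ) ;
    ~nameLast  = Quote ( GetValue ( Substitute ( ~field2 ; "::" ; ¶ ) ; 2 ) ) ;

    // Part 3 – assemble the query from the variables
    ~query = "SELECT " & ~nameFirst & ", " & ~nameLast &
             " FROM "  & ~table &
             " WHERE " & ~nameLast & " = ? " ;

    // Part 4 – run it
    ~result = ExecuteSQL ( ~query ; "" ; "" ; ~arg )
] ;
    ~result
)
```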
At first glance, this looks unnecessarily complex, but stick with it!
The whole thing is wrapped up in one ‘Let’ statement. Part 1 of the ‘Let’ statement is where we define calculation variables with the fully qualified field names of the fields that we need to reference in the query, plus the argument to be used. (The tilde at the start of the variable names is just the prefix I use for calculation variables.) Note that the field name variables can’t be used in the query itself, because they have the table occurrence as part of them.
So, Part 2 of the variable declaration basically takes the fully qualified field names and extracts the field name itself (e.g. ‘nameFirst’, without ‘PPL::’). It does this by:
- Using the Substitute function to convert ‘PPL::nameFirst’ to ‘PPL¶nameFirst’,
- And then separating the first and second values into the table name (‘PPL’) and fieldname (‘nameFirst’).
Note that these values have to be put in quotes, for use later in the query.
Part 3 assembles the query, also defined as a calculation variable, and using the variables defined in Part 2.
And finally, Part 4 defines one last variable, ~result, which is the result of the query.
This may seem like a lot of hard work, but I’m finding that once you’re in the habit of using ExecuteSQL in this way, it’s actually easier and very logical. And, crucially, it makes your queries much more robust, in that if any of the field names change, the query is automatically updated, because the fieldnames in the initial variable declaration (Part 1) come directly from the database. Unbreakable? Well, obviously not, but certainly less breakable.
(Note – this approach is not my invention, but has been adapted from solutions found on the Filemaker Technet.)
Well, I haven’t put anything on this blog for a very long time, but this seems a good time to start again, because I’ve realised that there’s a serious limitation to the technique described below, ‘Opening a Popover with a Script‘.
If you use the excellent WebDirect option to give your users access to the solution via a web browser, be warned that hidden popovers do not work. It seems that this is a known bug in Filemaker, which is good news in a way, as it may one day be fixed. But until then, I’m having to think of workarounds.
In ascending order of fiddliness, this is what you can do:
Option 1 – Hide the button by making it tiny (1×1 pt), and with no line or fill. This makes it effectively invisible, but not ‘hidden’. The disadvantage is that, because it’s not hidden, the button is clickable. That said, it’s very unlikely that the user will click on such a tiny object, especially if you leave the ‘change to hand’ option unchecked.
Option 2 – Place the button behind another button. Then set the button on top to perform a ‘do nothing’ script step, e.g. ‘Resume script’. The button should have a fill and line the same colour as the background, no ‘change to hand’ option, and no behaviour set for when it’s active or in focus. One disadvantage of this is that it makes the whole assembly hard to spot in layout mode. So I’m tending to use the little yellow lozenge (hidden, obviously, via the Inspector) as an indicator that there’s an artificially hidden button in that space.
(Thanks to the good people on Filemaker Technet for steering me in the right direction.)
I really like popovers – they’re the most useful interface feature to have been introduced to Filemaker for ages, and being able to add an OnObjectEnter script trigger adds to their usefulness, enabling us to do some stuff each time the popover opens.
But OnObjectEnter is a “Post” trigger (i.e. it runs after the event has happened), and sometimes it’s nicer to do the processing before the popover opens. Initialising fields is a good example, because we don’t want the existing (“old”) field contents displayed, however briefly, before the fields are cleared.
One way around this is to have:
- An ordinary (non-popover) button, which is the button visible to the user, and which the user clicks.
- A popover button, hidden from the user. (To make it hidden, I use the method suggested by ‘bfroo’ in the comments – ‘hide object when 1’ – so that it’s always hidden in Browse mode, but visible in Layout mode.)
- An object name for the popover itself.
I tend to put the popover button next to the other button, but different in colour to make it obvious that it’s not a part of the user’s view of the interface, like this:
(i.e. the ‘Analyse’ and ‘Outcomes’ buttons are the clickable buttons, and the little yellow ones next to them are the popover buttons.)
When the user clicks on the (non-popover) button, a script is performed. This script:
- Performs any pre-processing required,
- Goes to the object name of the popover, which has the effect of opening the popover.
- Refreshes the window.
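Put together, the button’s script might look something like this (a sketch – the object name, script name and field are invented for illustration):

```
# 'Open Analyse popover' – hypothetical sketch
# Pre-processing, e.g. initialising the popover's fields
Set Field [ PPL::someInput ; "" ]
# Going to the popover's object name opens the popover
Go to Object [ Object Name: "AnalysePopover" ]
Refresh Window []
```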
The disadvantage is that it’s obviously a bit more work to set up in the first place, but the added flexibility that this technique provides makes it well worthwhile.
In Part 1, I wrote about my first experiences with the ExecuteSQL function in Filemaker, looking at returning a basic list of field contents, passing arguments, and the use of simple joins. The function is proving useful, and I’m delving further into it, with help from others.
Here are my latest discoveries. (It’s all rather random, i.e. I’m finding out more as and when I need it, so this may not appear to be a logical order in which to describe things.)
Putting the query in a variable
It seems to be common practice to put the query statement and its result into calculation variables. (I’ve yet to see the full benefits of this, but enough experts do it to make me think it’s good practice!)
So instead of:

ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE nameLast = 'Smith' " ; "" ; "" )

we set the query and its result as variables:

$query =
    "SELECT nameFirst, nameLast
     FROM PPL
     WHERE nameLast = 'Smith'" ;
$result = ExecuteSQL ( $query ; "" ; "" )
Field and row separators
By default, if the function is returning more than one field (e.g. nameFirst and nameLast in the above example), these will be separated by a comma. You can put any other separator as the second parameter of the function, e.g.
(See the effect of this in the “Result” pane above.)
And if you want to separate rows with something other than the default carriage return, just put that in as the third parameter, e.g.
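For instance (reusing the earlier query – the separator strings here are arbitrary):

```
// Fields separated by " | " instead of the default comma
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL" ; " | " ; "" )

// Rows separated by a double carriage return
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL" ; " | " ; "¶¶" )
```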
It won’t come as a great surprise that the ORDER BY clause in SQL enables you to sort the output. The order can be ASCending or DESCending, but if you want the result in ascending order, you don’t need to specify it, as it’s the default.
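For example (again reusing the PPL fields):

```
// Ascending is the default...
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL ORDER BY nameLast, nameFirst" ; "" ; "" )

// ...so the direction only needs specifying when you want descending order
ExecuteSQL ( "SELECT nameFirst, nameLast FROM PPL ORDER BY nameLast DESC" ; "" ; "" )
```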
This one’s a little bit fiddly, and I needed the generous help of members of the Filemaker TechNet to get it working.
I needed to limit the scope of the query. I had already isolated a list of keys, via a combination of a Filemaker find and an eSQL query, and had originally hoped to base the next eSQL query on the table occurrence in which these keys were listed. Unfortunately, though, eSQL will always use the BASE TABLE of any table occurrence – it doesn’t recognise the filtering via relationships which is so useful elsewhere in Filemaker.
So I’ve got a list of keys of the records, in a global field, PPL::FILTERIDS. I want to query within this filtered subset – “within the group of records that I’ve already found, tell me how many are women, how many have disabilities, etc.”.
To start with, I’ve just done a query to give me the people’s names, like this:
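Reconstructed from the description that follows, the calculation would be something like:

```
Let ( [
    // Turn the return-delimited key list into a comma-separated,
    // quoted list, by replacing each CR with ','
    ~idList = Substitute ( PPL::FILTERIDS ; ¶ ; "','" ) ;

    // Wrap the list in an opening (' and a closing ')
    ~query  = "SELECT nameFirst, nameLast
               FROM PPL
               WHERE id IN ('" & ~idList & "')" ;

    ~result = ExecuteSQL ( ~query ; "" ; "" )
] ;
    ~result
)
```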
So we’re saying give me the names of each person whose id appears IN the list of IDs. But we can’t just use the field itself, because that is a return-delimited list. The IN clause needs a comma-separated list enclosed in parens, and, furthermore, each value in the list needs to be in (single) quotes (if they are alphanumeric text values). So the setting of the ~idList includes replacing each CR with ‘,’. Then, in the query variable itself we add an opening paren plus ‘, and a closing ‘ plus paren. Fiddly, but quite portable once you’ve got it working the first time (I hope!).
Note that it’s a bit less fiddly with numeric strings, in that they don’t need the single quotes around each value.
With this as a basis, we can carry on with more complex querying. More about that in a later post.
(Note – this is an update of an older post about filtering portals via relationships.)
The Requirement: A quick, user-friendly “spotlight” style search option, to filter a portal of people’s names.
Demo file: QuickFindDemo.zip
Here’s how the finished interface looks:
Using Filemaker’s nice “River” theme, we have a QuickFind box in which the user enters a string, and, as they type, the list expands or contracts to show matching records. (Note that the “Search” button in the screenshot is not part of this technique – that opens a popover in which the user can specify a number of other criteria – show only people over a certain age, who have attended this month, etc.)
The input field is a global field within the table to be searched. In this example, it’s called PPL::FILTERINPUT.
When this field is empty, it looks like this:
The magnifying glass icon and the “QuickFind” text are hidden, via the very useful “hide object when” feature, the condition here being “not IsEmpty (PPL::FILTERINPUT)”.
The portal showing the list of people’s names, plus other information as required, is based on a self-join. I’ve called the table occurrence PPL_PPL~filter with the self-join relationship defined as:
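The predicates might be along these lines (a guess based on the description below – ‘constOne’ would be a utility field containing 1, and isn’t named in the original):

```
PPL::FILTERIDS  =  PPL_PPL~filter::id
AND
PPL::constOne   ≠  PPL_PPL~filter::isDead
```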
In the PPL table, “id” is the primary key. So the portal will show anybody whose id appears in the global field PPL::FILTERIDS. (We can ignore the “isDead” thing, which just excludes dead people from the list.)
So, how does PPL::FILTERIDS get populated?
The population of the match field involves two scripts, “Initiate filter” and “Filter portal”. The second one, which does the searching, ONLY runs after a given pause between the user’s keystrokes. This is important, because you don’t want the searching script to run every time the user presses a key – it would be too slow. The duration of the delay that you allow depends mainly on the typing speed of the user – I’ve settled on 0.35 seconds.
So, the first script, “Initiate Filter”, runs EVERY time the user presses a key – it’s an OnObjectModify triggered script attached to PPL::FILTERINPUT. This is what it does:
The script is passed two parameters, one is the name of the field that we want to populate (i.e. PPL::FILTERIDS), and the other is the object name (not the field name) of the input field – I’ve called mine “PeopleNameFilter”.
These two parameters are stored as global variables, for use in the subsequent script. (Note that I use the GSP custom function to “get” these parameter values – it’s available here on Brian Dunning’s site).
We then use “Install OnTimer Script” to schedule the searching script every 0.35 seconds – this has the effect of allowing the user that amount of time to press another key – if s/he modifies the field again within 0.35 seconds, the script runs again, and the new OnTimer Script replaces the previous one.
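In script-step terms, something like this (a sketch – the global variable names and parameter labels are my own):

```
# 'Initiate filter' – hypothetical sketch, triggered OnObjectModify
Set Variable [ $$FILTERFIELD ; Value: GSP ( "filterField" ) ]
Set Variable [ $$FILTEROBJECT ; Value: GSP ( "filterObject" ) ]
Install OnTimer Script [ "Filter portal" ; Interval: .35 ]
```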
When the user has finished typing (i.e. 0.35 seconds has elapsed), the “Filter Portal” script runs. It looks like this:
If the input field is empty, we just initialise the multi-key field. This will have the effect of showing no records in the portal. We then use “Install OnTimer Script” again, with a zero interval, to cancel the repetition of the current script.
If we have an input string, we use a fairly straightforward ExecuteSQL statement to find the matching ids. Here’s the eSQL statement:
ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ? " ; "" ; "" ; "%" & PPL::FILTERINPUT & "%" )
PPL::searchableData, the field searched by the ExecuteSQL statement, is a concatenated text field, including whatever pieces of data you want to search on. (I’ve experimented with making this a calculated field, but have found it much more efficient to have it as a plain text field, updated by a script whenever relevant data changes.) In this case, the field is updated as:
Lower ( nameFirst & " " & nameLast & " " & address1 )
So, at the end of all this, we have PPL::FILTERIDS populated with the primary keys of all records where matching text has been found (anywhere within the searchableData field – it doesn’t have to be at the beginning), and the portal showing those records. If there are NO matching records, we say so, then cancel the OnTimer script (by installing another (blank) one), and go back to where we came from.
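The steps above might be sketched like this (hypothetical – the dialog text and variable names are invented, and the original script is only shown as a screenshot):

```
# 'Filter portal' – hypothetical sketch, run OnTimer
If [ IsEmpty ( PPL::FILTERINPUT ) ]
    Set Field By Name [ $$FILTERFIELD ; "" ]
    Install OnTimer Script [ Interval: 0 ]    # cancel the repetition
    Exit Script []
End If
Set Variable [ $ids ; Value:
    ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ? " ;
        "" ; "" ; "%" & PPL::FILTERINPUT & "%" ) ]
If [ IsEmpty ( $ids ) ]
    Show Custom Dialog [ "No matching records" ]
End If
Set Field By Name [ $$FILTERFIELD ; $ids ]
Install OnTimer Script [ Interval: 0 ]
Go to Object [ Object Name: $$FILTEROBJECT ]
```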
Note that the use of script parameters is optional, but it makes it possible to reuse the scripts elsewhere, should we need other similar QuickFinds.
As ever, please comment if you can suggest a better way of doing this. None of it is really of my own invention, and is documented here to remind myself of how it works, but if it’s of use to you, so much the better!
The requirement: A set of mutually exclusive fields, to be stored as Booleans (i.e. 1 or 0, on or off).
Demo file: Mutually exclusive Booleans
Note – There may be several reasons why these must be separate Boolean fields, rather than one non-Boolean (which would obviously be easier to implement) – totalling and reporting, for example, can be much more straightforward with Booleans.
In a simple example (see demo file), we have 3 fields, isRed, isGreen, and isBlue. If any one of them is set to 1, the other two must be set to zero.
This is how I do this, following advice and tips from Technet members and others.
First of all, each field is shown as a checkbox set, using values from a value list called “true”. This value list actually only has one value in it, i.e. “1”. The field is sized on the layout to show only a single checkbox. Like this:
Then, each field is defined as a number field, with two auto-enter settings:
- Data – 0 or 1 (i.e. this determines the default setting),
- Calculated value – as shown below:
So, basically, this calc says:
- If the field currently being updated is isRed, set isRed to whatever the user has selected, i.e. 1 or not. But, because we want to store a zero to indicate the “off” state, use the GetAsBoolean function to achieve that.
- If the field currently being updated is isGreen, and if isGreen is “on”, set isRed to “off”.
- If the field currently being updated is isBlue, and if isBlue is “on”, set isRed to “off”.
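Assembled from the three bullets above, the auto-enter calculation on isRed would read something like (a reconstruction – the demo file may differ):

```
Case (
    Get ( ActiveFieldName ) = "isRed" ; GetAsBoolean ( isRed ) ;
    Get ( ActiveFieldName ) = "isGreen" and isGreen ; 0 ;
    Get ( ActiveFieldName ) = "isBlue"  and isBlue  ; 0 ;
    isRed
)
```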
Each of the other two fields has a similar auto-enter calc applied to it, like this:
Note that the confusingly-labelled setting “Do not replace existing value of field (if any)” is set to OFF.
There’s another post on this blog advocating the use of conditionally-formatted objects to indicate on/off states of Booleans, but I’m now beginning to prefer this approach. As ever, if you know of a better way to skin the cat, do let me know!