Using eSQL to filter a “QuickFind” portal

(Note – this is an update of an older post about filtering portals via relationships.)

The Requirement: A quick, user-friendly “spotlight” style search option, to filter a portal of people’s names.

Demo file: QuickFindDemo.zip – downloadable from the “Box” over on the right.

The Interface

Here’s how the finished interface looks:

[Screenshot: qf1 – the finished QuickFind interface]

Using FileMaker's nice "River" theme, we have a QuickFind box in which the user enters a string; as they type, the list expands or contracts to show matching records.  (Note that the "Search" button in the screenshot is not part of this technique – it opens a popover in which the user can specify a number of other criteria – show only people over a certain age, who have attended this month, etc.)

The input field is a global field within the table to be searched.  In this example, it’s called PPL::FILTERINPUT.

When this field is empty, it looks like this:

[Screenshot: qf2 – the empty QuickFind box, showing the magnifying glass icon and placeholder text]

The magnifying glass icon and the “QuickFind” text are hidden, via the very useful “hide object when” feature, the condition here being “not IsEmpty (PPL::FILTERINPUT)”.

The Relationship

The portal showing the list of people’s names, plus other information as required, is based on a self-join.  I’ve called the table occurrence PPL_PPL~filter with the self-join relationship defined as:

[Screenshot: qf3 – the PPL_PPL~filter relationship definition]

In the PPL table, “id” is the primary key.  So the portal will show anybody whose id appears in the global field PPL::FILTERIDS.  (We can ignore the “isDead” thing, which just excludes dead people from the list.)
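
As a sketch, the key criterion is simply the following (the isDead exclusion is a second criterion in the same relationship):

  PPL::FILTERIDS  =  PPL_PPL~filter::id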

So, how does PPL::FILTERIDS get populated?

Populating FILTERIDS

The population of the match field involves two scripts, “Initiate filter” and “Filter portal”.  The second one, which does the searching, ONLY runs after a given pause between the user’s keystrokes.  This is important, because you don’t want the searching script to run every time the user presses a key – it would be too slow.  The duration of the delay that you allow depends mainly on the typing speed of the user – I’ve settled on 0.35 seconds.

So, the first script, “Initiate Filter”, runs EVERY time the user presses a key – it’s an OnObjectModify triggered script attached to PPL::FILTERINPUT.  This is what it does:

[Screenshot: qf4 – the "Initiate Filter" script]

The script is passed two parameters: the name of the field that we want to populate (i.e. PPL::FILTERIDS), and the object name (not the field name) of the input field – I've called mine "PeopleNameFilter".

These two parameters are stored as global variables, for use in the subsequent script. (Note that I use the GSP custom function to “get” these parameter values – it’s available here on Brian Dunning’s site).

We then use "Install OnTimer Script" to schedule the searching script every 0.35 seconds.  This gives the user that amount of time to press another key: if they modify the field again within 0.35 seconds, this script runs again and the new OnTimer script replaces the previous one.
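
The original shows the script as a screenshot; as a rough sketch (the $$ variable names are mine, and GSP ( n ) stands in for however you read the two parameters):

  # Initiate Filter – runs OnObjectModify of PPL::FILTERINPUT
  Set Variable [ $$FILTERFIELD ; Value: GSP ( 1 ) ]     # name of the field to populate, e.g. "PPL::FILTERIDS"
  Set Variable [ $$FILTEROBJECT ; Value: GSP ( 2 ) ]    # object name of the input field, e.g. "PeopleNameFilter"
  Install OnTimer Script [ "Filter Portal" ; Interval: .35 ]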

When the user has finished typing (i.e. 0.35 seconds has elapsed), the “Filter Portal” script runs.  It looks like this:

[Screenshot: qf6 – the "Filter Portal" script]

If the input field is empty, we just initialise the multi-key field, which has the effect of showing no records in the portal.  We then use "Install OnTimer Script" again – a blank one this time – to cancel the repetition of the current script.

If we have an input string, we use a fairly straightforward ExecuteSQL statement to find the matching ids.  Here’s the eSQL statement:

ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ?" ; "" ; "" ; "%" & PPL::FILTERINPUT & "%" )

PPL::searchableData, the field searched by the ExecuteSQL statement, is a concatenated text field, including whatever pieces of data you want to search on.  (I’ve experimented with making this a calculated field, but have found it much more efficient to have it as a plain text field, updated by a script whenever relevant data changes.)  In this case, the field is updated as:

Lower ( nameFirst & " " & nameLast & " " & address1 )

So, at the end of all this, we have PPL::FILTERIDS populated with the primary keys of all records where matching text has been found (anywhere within the searchableData field – it doesn’t have to be at the beginning), and the portal showing those records.  If there are NO matching records, we say so, then cancel the OnTimer script (by installing another (blank) one), and go back to where we came from.
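
Putting that together, the shape of the "Filter Portal" script is roughly this (a sketch only – the dialog wording and exact branching are mine):

  # Filter Portal – scheduled by the OnTimer timer
  If [ IsEmpty ( PPL::FILTERINPUT ) ]
      Set Field By Name [ $$FILTERFIELD ; "" ]          # initialise the multi-key field: the portal shows nothing
      Install OnTimer Script [ ]                        # a blank OnTimer script cancels the repetition
  Else
      Set Field By Name [ $$FILTERFIELD ; ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ?" ; "" ; "" ; "%" & PPL::FILTERINPUT & "%" ) ]
      If [ IsEmpty ( GetField ( $$FILTERFIELD ) ) ]
          Show Custom Dialog [ "QuickFind" ; "No matching records." ]
          Install OnTimer Script [ ]                    # cancel the timer
          Go to Object [ Object Name: $$FILTEROBJECT ]  # back to where we came from
      End If
  End If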

Note that the use of parameters is optional, but it makes the scripts reusable elsewhere, should we need other similar QuickFinds.

As ever, please comment if you can suggest a better way of doing this.  None of it is really of my own invention, and is documented here to remind myself of how it works, but if it’s of use to you, so much the better!


How to go to a specific portal row

The requirement:   In a portal, you need to be able to highlight a given record, and ensure that the portal row is visible.  (For example, a portal contains 20 records, but displays only 3.  You know which record you’re interested in, and you have the id of the record stashed in a global field, but the record may be out of sight when you display the portal.)

I devised a functional, but clunky, way of achieving this, then asked for advice on the FileMaker TechNet.  I got two really good suggestions.  The first, from David Jondreau, was this:

[Screenshot: portal_row_highlight1 – David Jondreau's looping script]

So, the portal shows sessions (SSN) within a programme (PGM), and the id of the session that we’re interested in is held in the global field PGM::CURRENTSESSIONID.  The loop just goes through the portal rows until it hits that record.  “Go to portal row” ensures that it’s highlighted and visible.
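
In sketch form (the object and field names here are assumptions – the screenshot shows the actual steps):

  Go to Object [ Object Name: "SessionsPortal" ]          # give the portal focus
  Go to Portal Row [ First ]
  Loop
      Exit Loop If [ SSN::id = PGM::CURRENTSESSIONID ]    # stop on the session we want – it's now selected and visible
      Go to Portal Row [ Next ; Exit after last ]
  End Loop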

The second suggestion, from Mike Beargie, uses a custom function called “ListIndex” (available from Brian Dunning’s site).  It’s a really concise one-liner:

[Screenshot: portal_row_highlight2 – Mike Beargie's one-line script]

The ListIndex CF just finds the position of the global id within the list of related SSN rec ids, and goes to that portal row, all in one script step.  Really neat, but, as Mike pointed out, it relies on the portal not being filtered.
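
As a sketch (again with an assumed id field name), the whole thing is:

  Go to Portal Row [ ListIndex ( List ( SSN::id ) ; PGM::CURRENTSESSIONID ) ]    # row number = position of the id in the related list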


Efficient Refresh With Cartesian Join

This tip was first brought to my attention by Matt Petrowsky on his excellent magazine site, but he credits Daniel of Digital Fusion in New Zealand with having discovered what is, essentially, a very simple way of refreshing portals without resorting to "Refresh Window" with its "Flush cached join results" option.  Daniel explains the technique very well here, and explains why unnecessary flushing of the cache is such a no-no, especially in a WAN setup.

He calls this the "Cartesian Refresh".  Basically, you just add a Cartesian (cross-product) match, using a global field, to any relationship which is giving refresh issues.  Setting that global field forces the relationship to be re-evaluated, without having any other effect on which records it matches.  So it's as if you've done a flush – the portal relying on the relationship gets refreshed – but the effect on system performance is minimal.

Here’s a simple example.  In the previous post, I wrote about  creating records using a “magic key” as part of the create relationship.  In this modified version of the script, I’m using a Custom Dialog to collect a value for the Description field:
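
(The script itself is shown as a screenshot in the original post.  Roughly, and with hypothetical names – CUP_new being the "magic key" table occurrence that allows record creation, and CHG::g_NewDescription a global used by the dialog – it does something like this:)

  Show Custom Dialog [ "New record" ; "Enter a description:" ]     # with CHG::g_NewDescription as an input field
  If [ Get ( LastMessageChoice ) = 1 ]
      Set Field [ CUP_new::Description ; CHG::g_NewDescription ]   # setting a field via the magic-key relationship creates the CUP record
      Commit Records/Requests [ With dialog: Off ]
      Refresh Window [ Flush cached join results ]                 # the step we want to get rid of
  End If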

This works fine, but only because of that last “Refresh Window”, which I now realise is dodgy, for all the reasons explained by Daniel.  Without that step, the portal which displays the related CUP records doesn’t show the newly created record (even though it’s there).

So, enter the Cartesian Refresh.  All we need is an additional match criterion for the relationship, relating any 2 fields via a cross-product, or Cartesian join, like this:
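
Something like this, with assumed key-field names (CHG::g_Refresh being a global added purely for the cross-join):

  CHG::CHG__kp_ID   =   CUP::CHG__kf_ID
  AND
  CHG::g_Refresh    ×   CUP::CUP__kp_ID        # the Cartesian (cross-product) criterion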

So the relationship is now basically saying, “Give me everything” AND “Give me only the records where the CHG_ID matches” – i.e. the Cartesian bit actually has no effect on the relationship OTHER than to force it to be refreshed.

So now we can get rid of the Refresh Window script step, and all the performance drawbacks that come with it.
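
In the sketch above, the end of the creation script then becomes (same assumed global):

  Set Field [ CUP_new::Description ; CHG::g_NewDescription ]
  Commit Records/Requests [ With dialog: Off ]
  Set Field [ CHG::g_Refresh ; "" ]        # touching the Cartesian global makes the relationship re-evaluate – no Refresh Window needed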


How To Sort Portals by Clicking on Column Labels

UPDATE – the technique described below is a good one, and works fine, but Kevin Frank has described a more elegant and portable way of doing it here.  The concept is essentially the same, i.e. having the portal permanently sorted on the same two fields, but Kevin makes it much more straightforward.  Note, though, that his solution relies on global variables, which won’t work in a separated solution – in this case, you need to use global fields instead.  He explains this in his own post.

Anyway, here’s my original post, for posterity. But do it Kevin’s way!


This is a handy technique for doing quite elegant sorting on a variety of column headings.  The behaviour is the standard "toggling", i.e. click once to sort on that column in ascending order, click again to sort descending.  The nice thing is that it uses only two sort fields, driven by a single global preference field set by a simple script, which makes it very portable.

So the portal is permanently sorted on two fields, "ascending" and "descending".  The one you're not sorting on always contains "1", so that it has no effect on the sort order of the portal.

The other component is a global field to hold the current sorting selection.  This is used for two things: toggling the preference (e.g. if it's currently set to "ascending name order", a second click reverses that to "descending name order", and vice versa), and driving the calculation on which the sorting is based.

Step 1 – Create “Current Sort Order” Global Field

In the appropriate “globals” table (INT in this example), add a field to contain the current sort order, e.g. INT_g_SortPref.  It’s just a global text field – you could use a global variable, but not if you are using Data Separation (global variables don’t span files).

Step 2 – Create Sort Fields

In the source table (on which the portal is based), add 2 fields to define the sort order, ascending or descending, named (e.g.) PGM_i_SortAscending and PGM_i_SortDescending.

The fields must be:

  • Calc
  • Unstored
  • Always evaluate
  • Result – Text

The calc should be something like this:
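
(The original shows this as a screenshot; here's a sketch assuming two sortable columns, with PGM_Date and PGM_Name as placeholder data fields, and parameter strings following the "PGM_Date_a" / "PGM_Date_d" convention used below.)

  // PGM_i_SortAscending
  Case (
      INT::INT_g_SortPref = "PGM_Date_a" ; GetAsNumber ( PGM_Date ) ;   // text result, so dates need GetAsNumber
      INT::INT_g_SortPref = "PGM_Name_a" ; PGM_Name ;
      1                                                                 // any other (or no) selection: constant "1"
  )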

PGM_i_SortDescending will be similar, but with the field names associated with the descending settings, i.e.
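
Again as a sketch, with the same placeholder names:

  // PGM_i_SortDescending
  Case (
      INT::INT_g_SortPref = "PGM_Date_d" ; GetAsNumber ( PGM_Date ) ;
      INT::INT_g_SortPref = "PGM_Name_d" ; PGM_Name ;
      1
  )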

This is what happens when the user clicks on a column heading:

  1. A script is performed, to set the global INT_g_SortPref field to the value passed to the script as a parameter.  In this example, the parameter will be “PGM_Date_a”, or “PGM_Date_d”, etc.
  2. Because the calcs of PGM_i_SortDescending and PGM_i_SortAscending depend on INT_g_SortPref, updating INT_g_SortPref causes both calcs to be re-evaluated.  So, for example, if the user has opted for "Date descending", INT_g_SortPref gets set to "PGM_Date_d", which causes PGM_i_SortAscending to be set to "1" in every PGM record, and PGM_i_SortDescending to be set to the value of the specified date field in each PGM record.  (NB – these sort fields are not globals, so they will typically have a different value in each record.)

So in each calc, there's a pair of possible outcomes for each sortable column.  In the SortAscending calc, all the descending options return "1", and the ascending options return the contents of the field on which we want to sort.  (Note the need to use "GetAsNumber" for date fields – the result of the calc is text, which would not otherwise sort dates correctly.)

Step 3 – Sort the Portal

Enable sorting for the portal, and specify BOTH sort fields, as shown below:

i.e. the portal is permanently sorted on both fields.  This is why we need to put “1” in the descending sort field when we want to sort in ascending order, i.e. PGM_i_SortDescending makes no difference to the sort.

Step 4 – Create the Sorting Script

It’s not really a sorting script at all – it just stores the sort order selected, which has been passed as the script parameter.  The sorting itself is done automatically by Filemaker’s own “portal sort” functionality, based on the contents of PGM_i_SortDescending and PGM_i_SortAscending.  The trick is in setting those sorting fields according to the column label clicked by the user.  (Note – I’ve used the “GSP” custom function here (from the Brian Dunning site), but “Get(ScriptParameter)” will do fine.)
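
As a sketch, the whole script can be as small as this (the commit at the end is my assumption – a step to nudge the unstored calcs to re-display may or may not be needed in practice):

  # Sort portal – parameter is e.g. "PGM_Date_a" or "PGM_Date_d"
  Set Field [ INT::INT_g_SortPref ; GSP ( 1 ) ]      # or Get ( ScriptParameter )
  Commit Records/Requests [ With dialog: Off ]       # assumption: helps the re-sorted portal redraw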

Step 5 – Attach the Script to Buttons in Column Headings

For each column heading, create a button which calls the script with the sort order as a parameter.

Step 6 – Add Sort Order Indicators

There needs to be an indicator for each sortable field.  The easiest way is just to have a pair of conditionally formatted text objects at the top of the column, i.e. ▲ and ▼.  The conditional format formula is (e.g.) INT::INT_g_SortPref ≠ "PGM_Date_a", with the text colour set to white – i.e. the symbol only shows when the relevant sort order is in effect.


Filtering Portals, Method 2 – Using a Relationship for Filtering

Using Filemaker’s native portal filtering is a great way of narrowing down a portal list, as long as there are fewer than a couple of hundred records involved.  (See this post for more details.)

But where there are more records involved, it's better to use a relationship-driven filter.  I came upon the technique explained here on the website of Nightwing Enterprises, and I've included it here by way of documentation for myself, and in no sense to suggest that it was my idea!

In “Focus” (the system that I look after), this arises where we are potentially listing ALL people in the portal (rather than just those who meet certain criteria, enrolled on a certain activity, etc.).

So, for example, on the Main Menu we have a search box to enable users to find the person they are interested in from a potential several thousand, by entering part of their first name or surname.  To achieve this, we have:

  • A Global filter field, e.g. INT_g_Filter.  This is a text field, which will hold the search string entered by the user.
  • A corresponding “multi-key” field, e.g. INT_g_Mkey.  This is also a text field, which will be populated by a triggered script when INT_g_Filter is modified.
  • A search field in the People table, a text calc – see the sketch after this list.

    (Obviously, if you only want to search on the surname field, that’s all you would put in the calc.  Similarly, if you want more fields to be used, they can be added on subsequent lines, with carriage returns separating them.)

  • A relationship between the table where the multi-key field is held (say, “Interface”), and the table holding the actual data (say, “People”).  The relationship is defined as shown below:
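
As a sketch of the search field from the list above (the relationship definition itself is in the screenshot; nameFirst and nameLast are placeholder field names):

  // People search field – a text calc, one searchable field per line
  nameFirst & ¶ &
  nameLast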

And now, the clever bit, i.e. the triggered script, run “OnObjectModify” of INT_g_Filter.  (Note that the script uses the “GSP (num)” custom function to retrieve parameters – that’s from the Brian Dunning site, specifically here.)

This script is a much simplified version of the Nightwing original (which allows for portal record selection using arrow keys, and therefore has much more complex keystroke trapping).  But if you just want a simple filter, this does the job.  Note that this version receives two parameters, i.e. the name of the multikey field (P1), and the name of the search string input field (P2).  (NB – I had trouble passing the contents of the input field because, at the moment the script is triggered, the INT record hasn't been committed, so I found it easier to pass the field name, then do a commit, then a GetField.)
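
As a skeleton (the matching logic itself is in the Nightwing original and isn't reproduced here; GSP ( n ) stands in for however you read the two parameters):

  # Filter – runs OnObjectModify of INT_g_Filter
  Set Variable [ $multikeyField ; Value: GSP ( 1 ) ]      # P1: name of the multi-key field, e.g. "INT::INT_g_Mkey"
  Set Variable [ $inputField ; Value: GSP ( 2 ) ]         # P2: name of the search-string input field
  Commit Records/Requests [ With dialog: Off ]            # commit first, so GetField sees the latest keystroke
  Set Variable [ $searchString ; Value: GetField ( $inputField ) ]
  # ...build $matchingKeys from $searchString here (see the Nightwing original)...
  Set Field By Name [ $multikeyField ; $matchingKeys ]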

 


How to show the number of portal rows

Often, I need to show the number of records shown in a given portal.  This may or may not be the same as a found count, or total record count – when using portal filtering, different portals can show different subsets of the related records, so it’s not as simple as just showing the total number of records.

The easiest way seems to be to use a Summary field from the related TO – usually just a simple “count of ID”.  The value in the summary will change according to where the field is displayed, and in a portal, it will show the number of related recs shown in that portal.

The problem with this is that, in order to show the number of records in the portal, the summary field has to be displayed IN the portal, and therefore on every portal row.  But usually you'll only want it once, at the top of the portal.  So the trick is to DUPLICATE the portal and shrink it to the absolute minimum, i.e. one row, no scrollbar, resized to just the size of the one summary field.  Obviously it needs to be FILTERED in the same way as the portal which is actually displaying the records.

Here’s an example:


The main portal shows a filtered set of records from the Attendances table.  The “mini-portal”, which is simply the “3” record count, is a miniaturised copy of the same portal, containing the one Summary field, ATT_s_Count, defined as “Count of ATT__kp_ID”.  The same field is used elsewhere on the same layout, to show the count of a differently filtered subset of Attendance records.


Filtering portals – Method 1, using FM’s native portal filtering

This method of portal filtering uses FileMaker's native "filtered portals" feature, which first appeared in FileMaker 11.  This is a great way to reduce the number of relationships in a solution (the filtering doesn't rely on extra relationships).  Matt Petrowsky showed how to use the feature elegantly and dynamically – i.e. the contents of the portal change according to a search string entered by the user – although native FM portal filtering is not great for very large numbers of records.  For that (i.e. more than a couple of hundred records), it's better to use a relationship-based filter – see Method 2.

A combination of techniques is often needed: I use both on one layout – Method 1 to filter the portal showing names of people enrolled on a Programme (never more than a few hundred, and usually much less), and Method 2 to filter the portal showing all people on the system (several thousand).

So, to deal with the first method first, the “Expected” portal above uses the following:

A text field in the parent table (in this case Sessions), called SSN_Filter.  This is the field into which the user enters the search string.

A triggered script, “Filter Portal”, run when this field is modified (OnObjectModify).  The script does this:
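
(The screenshot isn't reproduced; as a sketch, the steps are roughly these – how $object gets populated is my assumption:)

  # Filter Portal – runs OnObjectModify of SSN_Filter
  Set Variable [ $object ; Value: Get ( ActiveLayoutObjectName ) ]   # the filter field's object name, captured before committing
  Commit Records/Requests [ With dialog: Off ]                       # commits SSN_Filter so the portal filter sees it
  Go to Object [ Object Name: $object ]                              # put the cursor back in the filter field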

i.e. basically it commits the record (including the contents of the SSN_Filter field), then takes the user back to the filter field (the name of which has been stored as $object), to continue entering the search string.

The portal filter itself is defined as:
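
(As a sketch, with assumed table-occurrence and field names – PPL_expected standing in for the portal's table occurrence:)

  PatternCount ( PPL_expected::nameFirst ; SSN::SSN_Filter )
  or PatternCount ( PPL_expected::nameLast ; SSN::SSN_Filter )
  or IsEmpty ( SSN::SSN_Filter )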

i.e. show records when either there’s nothing in the filter field (the “IsEmpty” bit at the end of the expression, which will have the effect of showing ALL records) or there’s data in the filter field, and that data matches something in the first name or surname fields.

Simple, but clever, EXCEPT for one problem.  Because the SSN_Filter field is not a global, this method is not really suitable in a multi-user setup where it’s likely that more than one user will be accessing the same SSN rec.  That said, the SSN_Filter field is not really “data”, so it’s no big deal if users trip each other up, but it’s not perfect.  Also, on the filter input field, make sure “auto-complete” is switched OFF, otherwise it will display the last stored value if the user enters the same starting value.

See the later post for an explanation of how Method 2 uses global fields and relationships – a more robust approach, better suited to larger data sets.