Using eSQL to filter a “QuickFind” portal

(Note – this is an update of an older post about filtering portals via relationships.)

The Requirement: A quick, user-friendly “spotlight” style search option, to filter a portal of people’s names.

Demo file: QuickFindDemo.zip – downloadable from the “Box” over on the right.

The Interface

Here’s how the finished interface looks:

qf1

Using FileMaker’s nice “River” theme, we have a QuickFind box in which the user enters a string, and, as they type, the list expands or contracts to show matching records.  (Note that the “Search” button in the screenshot is not part of this technique – that opens a popover in which the user can specify a number of other criteria – show only people over a certain age, who have attended this month, etc.)

The input field is a global field within the table to be searched.  In this example, it’s called PPL::FILTERINPUT.

When this field is empty, it looks like this:

qf2

The magnifying glass icon and the “QuickFind” text are hidden, via the very useful “hide object when” feature, the condition here being “not IsEmpty (PPL::FILTERINPUT)”.

The Relationship

The portal showing the list of people’s names, plus other information as required, is based on a self-join.  I’ve called the table occurrence PPL_PPL~filter with the self-join relationship defined as:

qf3

In the PPL table, “id” is the primary key.  So the portal will show anybody whose id appears in the global field PPL::FILTERIDS.  (We can ignore the “isDead” thing, which just excludes dead people from the list.)
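In other words (leaving the isDead predicate aside), the match criterion boils down to:

// PPL::FILTERIDS is a return-delimited list of ids (a "multi-key"), so any person
// whose id appears anywhere in that list is related, and therefore shows in the portal
PPL::FILTERIDS = PPL_PPL~filter::id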

So, how does PPL::FILTERIDS get populated?

Populating FILTERIDS

The population of the match field involves two scripts, “Initiate filter” and “Filter portal”.  The second one, which does the searching, ONLY runs after a given pause between the user’s keystrokes.  This is important, because you don’t want the searching script to run every time the user presses a key – it would be too slow.  The duration of the delay that you allow depends mainly on the typing speed of the user – I’ve settled on 0.35 seconds.

So, the first script, “Initiate Filter”, runs EVERY time the user presses a key – it’s an OnObjectModify triggered script attached to PPL::FILTERINPUT.  This is what it does:

qf4

The script is passed two parameters: one is the name of the field that we want to populate (i.e. PPL::FILTERIDS), and the other is the object name (not the field name) of the input field – I’ve called mine “PeopleNameFilter”.

These two parameters are stored as global variables, for use in the subsequent script. (Note that I use the GSP custom function to “get” these parameter values – it’s available here on Brian Dunning’s site).

We then use “Install OnTimer Script” to schedule the searching script every 0.35 seconds – this has the effect of allowing the user that amount of time to press another key.  If they modify the field again within 0.35 seconds, this triggered script runs again, and the newly installed OnTimer script replaces the previous one.
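Put together, “Initiate Filter” is roughly this (a sketch only – the $$variable names, and the use of the GSP custom function to read the parameters, are my assumptions about the demo file):

# Triggered OnObjectModify from PPL::FILTERINPUT
# Parameter 1 = field to populate (e.g. "PPL::FILTERIDS"), parameter 2 = object name of the input field
Set Variable [ $$filterTargetField ; Value: GSP ( 1 ) ]
Set Variable [ $$filterInputObject ; Value: GSP ( 2 ) ]
# Each keystroke re-installs the timer, so "Filter portal" only fires once typing pauses
Install OnTimer Script [ "Filter portal" ; Interval: .35 ]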

When the user has finished typing (i.e. 0.35 seconds has elapsed), the “Filter Portal” script runs.  It looks like this:

qf6

If the input field is empty, we just initialise the multi-key field.  This will have the effect of showing no records in the portal.  We then use “Install OnTimer Script” again, with a zero interval, to cancel the repetition of the current script.

If we have an input string, we use a fairly straightforward ExecuteSQL statement to find the matching ids.  Here’s the eSQL statement:

ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ?" ; "" ; "" ; "%" & PPL::FILTERINPUT & "%" )
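In outline, then, “Filter Portal” does something like this (a sketch – the “no matches” handling mentioned below isn’t shown, and Set Field By Name stands in for however the demo targets the field named in $$filterTargetField):

# Runs via the OnTimer installed by "Initiate Filter"
If [ IsEmpty ( PPL::FILTERINPUT ) ]
    # An empty multi-key means the portal shows nothing; cancel the timer and stop
    Set Field By Name [ $$filterTargetField ; "" ]
    Install OnTimer Script [ ]
    Exit Script [ ]
End If
# Populate the multi-key with every matching id, then cancel the timer
Set Field By Name [ $$filterTargetField ;
    ExecuteSQL ( "SELECT id FROM PPL WHERE searchableData LIKE ?" ; "" ; "" ; "%" & PPL::FILTERINPUT & "%" ) ]
Commit Records/Requests [ With dialog: Off ]
Install OnTimer Script [ ]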

PPL::searchableData, the field searched by the ExecuteSQL statement, is a concatenated text field, including whatever pieces of data you want to search on.  (I’ve experimented with making this a calculated field, but have found it much more efficient to have it as a plain text field, updated by a script whenever relevant data changes.)  In this case, the field is updated as:

Lower ( nameFirst & " " & nameLast & " " & address1 )

So, at the end of all this, we have PPL::FILTERIDS populated with the primary keys of all records where matching text has been found (anywhere within the searchableData field – it doesn’t have to be at the beginning), and the portal showing those records.  If there are NO matching records, we say so, then cancel the OnTimer script (by installing another (blank) one), and go back to where we came from.

Note that the use of parameters is optional, but it makes the scripts reusable elsewhere, should we need other similar QuickFinds.

As ever, please comment if you can suggest a better way of doing this.  None of it is really of my own invention, and is documented here to remind myself of how it works, but if it’s of use to you, so much the better!


How to go to a specific portal row

The requirement:   In a portal, you need to be able to highlight a given record, and ensure that the portal row is visible.  (For example, a portal contains 20 records, but displays only 3.  You know which record you’re interested in, and you have the id of the record stashed in a global field, but the record may be out of sight when you display the portal.)

I devised a functional, but clunky, way of achieving this, then asked for advice on the Filemaker Technet.  I got 2 really good suggestions.  The first, from David Jondreau, suggested this:

portal_row_highlight1

So, the portal shows sessions (SSN) within a programme (PGM), and the id of the session that we’re interested in is held in the global field PGM::CURRENTSESSIONID.  The loop just goes through the portal rows until it hits that record.  “Go to portal row” ensures that it’s highlighted and visible.
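In script-step form, that loop is roughly this (a sketch – the portal object name and the SSN table occurrence name are my inventions):

Go to Object [ Object Name: "SessionsPortal" ]
Go to Portal Row [ Select ; First ]
Loop
    # Stop on the row whose session id matches the one stashed in the global
    Exit Loop If [ PGM_SSN::id = PGM::CURRENTSESSIONID ]
    Go to Portal Row [ Select ; Next ; Exit after last ]
End Loop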

The second suggestion, from Mike Beargie, uses a custom function called “ListIndex” (available from Brian Dunning’s site).  It’s a really concise one-liner:

portal_row_highlight2

The ListIndex CF just finds the position of the global id within the list of related SSN record ids, and goes to that portal row, all in one script step.  Really neat, but, as Mike pointed out, it relies on the portal not being filtered.
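Written out, it’s roughly this (same caveat about TO names, and assuming the CF takes its parameters as ( list ; value )):

# One step: go straight to the row whose related id matches the global
Go to Portal Row [ Select ; By Calculation: ListIndex ( List ( PGM_SSN::id ) ; PGM::CURRENTSESSIONID ) ]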


Family Relationships – Design Walkthrough

In “FOCUS”, the application that I spend much of my time looking after, we need to record family networks.  This post explains how I’ve accomplished this.  As ever, it’s included here partly for my own documentation needs, but also in the hope that a) it may be interesting to others, and that b) others may be able to tell me how to make it better.

(Note – Obviously, throughout this explanation, be careful not to confuse family relationships with Filemaker relationships.  Hopefully, it’s clear which is which.)

People in FOCUS can be linked to each other to reflect family relationships.  The Relationships (or REL) table is the join table which describes the relationship.  For most relationships, there are 2 REL records, i.e. one for each side of the relationship.

So, for example, Homer Simpson is the father of Bart Simpson, so there are 2 REL records:

  • Record 1 tells us that Homer is the father of Bart.
  • Record 2 tells us that Bart is the son of Homer.

These are the main fields involved:

rel_field_defs

  • REL__kf_PPL_ID_1 and REL__kf_PPL_ID_2 – these are the 2 foreign keys to the PPL table, i.e. the 2 people in the relationship.
  • REL_1’sGender and REL_2’sGender – the gender of each.  This is needed in order to control:
    • The value list of potential relationships for this person – e.g. a female cannot be a father.
    • The “antonym” for the relationship – e.g. once we know that Homer is Bart’s father, we need to know whether Bart is male or female to know whether Bart is Homer’s son or daughter.
  • REL_1is2’s___ – i.e. the nature of the relationship from the perspective of REL__kf_PPL_ID_1.  If ID_1 is Homer, this field holds “father”.
  • REL_2is1’s___ – i.e. the nature of the relationship from the perspective of REL__kf_PPL_ID_2.  If ID_1 is Homer (so ID_2 is Bart), this field holds “son”.

In the complementary record, the IDs (and therefore the descriptive fields) are the other way around.  This post doesn’t cover the writing of these REL records, but it’s fairly straightforward.  (Maybe I’ll document that later.)

Showing related people

In the various portals etc., we use a combination of filtered portals and relationship-driven portals to show the various types of related people.

Showing all related people

To show all people related to a person, a portal displaying “PPL to REL” is the easiest way.

The PPL to REL relationship is thus:  PPL__kp_ID = REL__kf_PPL_ID_1.

The extension of the relationship is “back” to PPL, to see details of the related person.  The PPL to REL to PPL relationship is thus:

PPL__kp_ID = REL__kf_PPL_ID_1
REL__kf_PPL_ID_2  = PPL__kp_ID

The portal of all relationships when REL__kf_PPL_ID_1 is Homer will display (for example):

  • PPL to REL to PPL::PPL_Name  (i.e. “Bart”)
  • PPL to REL::REL_2is1’s_____  (i.e. “son”)

Showing specific types of related people

Portal filtering is the best way to control what types of relationships are shown in a given portal.  (This would not be the case if we were dealing with more than a few relationships per person, after which point portal filtering becomes rather slow.)

So, for example, if we need a portal to show the parents of Bart, we would use the portal described above, but filter it, thus:

rel_filter1
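In calc terms, the filter is along these lines (a sketch – I’m writing the REL occurrence as PPL_REL, but use whatever your graph calls it):

// keep only rows where the related person is a parent of the current person
PPL_REL::REL_2is1’s___ = "father" or PPL_REL::REL_2is1’s___ = "mother"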

Portals to show “children of…” etc. are similarly straightforward, along with perhaps a “catch-all” portal to show “other related people”, using a “not equals” filter, thus:

rel_filter2
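…and the “other related people” filter is just the reverse, excluding whichever relationship types already have their own portals, e.g.:

// anything that isn’t a parent or a child ends up in the catch-all portal
PPL_REL::REL_2is1’s___ ≠ "father" and PPL_REL::REL_2is1’s___ ≠ "mother" and
PPL_REL::REL_2is1’s___ ≠ "son" and PPL_REL::REL_2is1’s___ ≠ "daughter"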

Showing siblings

Siblings are a special case.  Although it’s possible to specify that “1 is 2’s brother/sister”, we shouldn’t have to, when it can be deduced (i.e. if 1 and 2 have one or both parents in common).

So for this we can drive the portal with a relationship, thus:

PPL to REL to PPL to REL to PPL, where related people at the right end of the squid leg are the siblings of the person at the left end.

I’ve used match fields – so the first PPL to REL relationship is thus:

rel_rel1

(where PPL_MatchTypeParent holds “mother[cr]father”).  So, using this relationship, PPL to REL to PPL gives the parents of the person on the left.
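So the criteria for that first PPL-to-REL occurrence are, in essence (a sketch, using the field names listed earlier):

PPL__kp_ID = REL__kf_PPL_ID_1
AND
PPL_MatchTypeParent = REL_2is1’s___    // “mother[cr]father”, so only parent links match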

For the second “step” of the relationship, i.e. PPL to REL to PPL to REL, we use another match field, PPL_MatchTypeChild, which holds “son[cr]daughter”.  The relationship is set up thus:

rel_rel2
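Its criteria mirror the first step (again a sketch – I’ve invented the occurrence names “PPL~parents” and “REL~children” purely for illustration):

PPL~parents::PPL__kp_ID = REL~children::REL__kf_PPL_ID_1
AND
PPL~parents::PPL_MatchTypeChild = REL~children::REL_2is1’s___    // “son[cr]daughter”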

So, where the first step of the relationship gives us “parents of Bart”, the second step gives us “children of the parents of Bart”, i.e. Lisa, Maggie, plus, of course, Bart himself.

All that remains is to show the “siblings” portal, based on PPL to REL to PPL to REL, with, crucially, a filter to exclude the person at the start of the chain (Bart), thus:

rel_filter3
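That filter just needs to knock the original person out of the list – something like this (occurrence name assumed):

// hide the row when the person at the far end of the chain is the person whose record we’re on
PPL_REL_PPL_REL::REL__kf_PPL_ID_2 ≠ PPL::PPL__kp_ID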


Efficient Refresh With Cartesian Join

This tip was first brought to my attention by Matt Petrowsky on his excellent magazine site, but he credits Daniel of Digital Fusion in New Zealand with having discovered what is, essentially, a very simple way of refreshing portals without using the option to “Flush cache to disk”.  Daniel explains the technique very well here, and explains why unnecessary flushing of the cache is such a no-no, especially in a WAN setup.

He calls this the “Cartesian Refresh”.  Basically, you just add a Cartesian element to any relationship which is giving refresh issues.  Setting the global field on one side of the relationship forces the relationship to be evaluated, without having any other effect on the relationship.  So it’s as if you’ve done a flush, and the portal relying on the relationship gets refreshed, but the effect on system performance is minimal.

Here’s a simple example.  In the previous post, I wrote about creating records using a “magic key” as part of the create relationship.  In this modified version of the script, I’m using a Custom Dialog to collect a value for the Description field:
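Something like this, as a sketch – the CHG_CUP~create occurrence name and the g_NewDescription global are inventions for illustration:

# Collect a description, then create the related CUP record through the "magic key" relationship
Show Custom Dialog [ "New item" ; "Enter a description:" ; Input field: CHG::g_NewDescription ]
If [ Get ( LastMessageChoice ) = 1 ]
    # Setting a field through the allow-creation relationship creates the CUP record
    Set Field [ CHG_CUP~create::CUP_Description ; CHG::g_NewDescription ]
    Commit Records/Requests [ With dialog: Off ]
    Refresh Window [ Flush cached join results ]
End If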

This works fine, but only because of that last “Refresh Window”, which I now realise is dodgy, for all the reasons explained by Daniel.  Without that step, the portal which displays the related CUP records doesn’t show the newly created record (even though it’s there).

So, enter the Cartesian Refresh.  All we need is an additional match criterion for the relationship, relating any 2 fields via a cross-product, or Cartesian join, like this:

So the relationship is now basically saying, “Give me everything” AND “Give me only the records where the CHG_ID matches” – i.e. the Cartesian bit actually has no effect on the relationship OTHER than to force it to be refreshed.
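In outline, then (field names here are placeholders), the relationship and the replacement script step look like this:

CHG__kp_ID = CUP__kf_CHG_ID         // the "real" match
AND
CHG_g_Refresh x CUP__kp_ID          // Cartesian: matches everything, so it only forces re-evaluation

# …and wherever we previously used Refresh Window, we just touch the global instead:
Set Field [ CHG::CHG_g_Refresh ; Get ( UUID ) ]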

So now we can get rid of the Refresh Window script step, and all the performance drawbacks that come with it.


How To Sort Portals by Clicking on Column Labels

UPDATE – the technique described below is a good one, and works fine, but Kevin Frank has described a more elegant and portable way of doing it here.  The concept is essentially the same, i.e. having the portal permanently sorted on the same two fields, but Kevin makes it much more straightforward.  Note, though, that his solution relies on global variables, which won’t work in a separated solution – in this case, you need to use global fields instead.  He explains this in his own post.

Anyway, here’s my original post, for posterity. But do it Kevin’s way!


This is a handy technique for doing quite elegant sorting on a variety of column headings.  The behaviour is the standard “toggling”, i.e. click once to sort on that column in ascending order, click again to sort descending.  The nice thing is that it only uses 2 sort fields, the contents of which are set by a simple script, which makes it very portable.

So the portal is permanently sorted on 2 fields, “ascending” and “descending”.  The one you’re not sorting on always contains “1”, to ensure that it doesn’t appear to affect the sorting of the portal.

The other component is a global field to hold the current sorting selection.  This is used for 2 things – i.e. toggling the preference (e.g. if it’s set to “ascending name order”, we’ll want to reverse that to “descending name order”, and vice versa), and driving the calculation on which the sorting is based.

Step 1 – Create “Current Sort Order” Global Field

In the appropriate “globals” table (INT in this example), add a field to contain the current sort order, e.g. INT_g_SortPref.  It’s just a global text field – you could use a global variable, but not if you are using Data Separation (global variables don’t span files).

Step 2 – Create Sort Fields

In the source table (on which the portal is based), add 2 fields to define the sort order, ascending or descending, named (e.g.) PGM_i_SortAscending and PGM_i_SortDescending.

The fields must be:

  • Calc
  • Unstored
  • Always evaluate
  • Result – Text

The calc should be something like this:
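(A sketch of PGM_i_SortAscending – PGM_Date and PGM_Name are stand-ins for whatever fields your columns actually show:)

Case (
    INT::INT_g_SortPref = "PGM_Date_a" ; GetAsNumber ( PGM_Date ) ;    // dates need GetAsNumber – see note below
    INT::INT_g_SortPref = "PGM_Name_a" ; PGM_Name ;
    1    // any other choice (e.g. a descending sort): same value in every record, so no effect
)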

PGM_i_SortDescending will be similar, but with the field names associated with the descending settings, i.e.
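(Again a sketch, with the same stand-in field names:)

Case (
    INT::INT_g_SortPref = "PGM_Date_d" ; GetAsNumber ( PGM_Date ) ;
    INT::INT_g_SortPref = "PGM_Name_d" ; PGM_Name ;
    1
)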

This is what happens when the user clicks on a column heading:

  1. A script is performed, to set the global INT_g_SortPref field to the value passed to the script as a parameter.  In this example, the parameter will be “PGM_Date_a”, or “PGM_Date_d”, etc.
  2. Because the calc of PGM_i_SortDescending and PGM_i_SortAscending depends on INT_g_SortPref, the updating of INT_g_SortPref causes the calc in PGM_i_SortDescending and PGM_i_SortAscending to be re-evaluated.  So, for example, if the user has opted for “Date descending”, INT_g_SortPref gets set to “PGM_Date_d”, which causes PGM_i_SortAscending to be set to “1” in every PGM record, and PGM_i_SortDescending to be set to the value in the specified date field in each PGM record.  (NB – these sort fields are not globals, and are therefore likely to have a different value in each record.)

So in the calc, there’s a pair of possible values for each sortable column.  In the SortAscending calc, all descending fields are set to “1”, and the ascending fields are set to the contents of the field on which we want to sort.  (Note the need to use “GetAsNumber” for date fields – this is because the result of the calc is text, which will not sort dates correctly otherwise.)

Step 3 – Sort the Portal

Enable sorting for the portal, and specify BOTH sort fields, as shown below:

i.e. the portal is permanently sorted on both fields.  This is why we need to put “1” in the descending sort field when we want to sort in ascending order, i.e. PGM_i_SortDescending makes no difference to the sort.

Step 4 – Create the Sorting Script

It’s not really a sorting script at all – it just stores the sort order selected, which has been passed as the script parameter.  The sorting itself is done automatically by Filemaker’s own “portal sort” functionality, based on the contents of PGM_i_SortDescending and PGM_i_SortAscending.  The trick is in setting those sorting fields according to the column label clicked by the user.  (Note – I’ve used the “GSP” custom function here (from the Brian Dunning site), but “Get(ScriptParameter)” will do fine.)
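So the script is essentially a single step:

# "Sort by column" – the script parameter is the chosen order, e.g. "PGM_Date_d"
Set Field [ INT::INT_g_SortPref ; Get ( ScriptParameter ) ]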

Step 5 – Attach the Script to Buttons in Column Headings

For each column heading, create a button which calls the script with the sort order as a parameter.

Step 6 – Add Sort Order Indicators

There needs to be an indicator for each sortable field.  The easiest way is just to have a pair of conditionally formatted text objects at the top of the column, i.e. ▲ and ▼.  The conditional format formula is (e.g.) INT::INT_g_SortPref ≠ “PGM_Date_a”, with the text colour set to white (i.e. only show the symbol when the relevant sort order is in effect).


Easy Toggling of Attributes

The Requirement: A quick, user-friendly toggling of people’s “attributes”.

Demo file: ToggleAttributesDemo.zip – downloadable from the “Box” over on the right.

I’m now using a very efficient method for toggling fields.  A typical example is with “attributes” in the application I’m currently working on.

First, a bit of background. The users wanted to be able to assign “attributes”, or labels, to a person’s record.  This attribute could be anything – “Homeless”, “Likes drum and bass”, etc. – and will be used for reporting, producing mailing lists, etc.  The attributes have to be user-definable, and a person can have as many attributes as necessary (although typically not more than about 10).

I did think of storing each person’s attribute allocations in a text field, with a value list enabling the user to select as many of the available tags as they needed via a check-box display.  But, thinking ahead, this would have limited the usefulness of attribute allocations when it came to reporting.  Much better (if a bit more work) to have a separate table of “attribute allocations”, i.e. a join table between the People table and the Attributes table.  This would not only increase my reporting options later, but also enable the storage of details of the tag allocation (reason for allocating it, date, etc.).

So, 2 new tables:

  • Attributes (“ATR”), with primary key, attribute name, plus any other details needed.
  • Holder Attribute Allocations (“HAJ”) – primary key, plus 2 foreign keys (to People and Attribute  tables), and any other details that we may need.  (Note the use of “Holder”, rather than “Person”, as this will be used for other entities which can hold Attributes.)

In the People table, a couple of new fields:

  • AttributesHeld, i.e. a calc field, listing all the Attributes held for this person.  The calculation is defined as List ( PPL_HAJ_ATR::id).
  • ATR_ID – i.e. a global field, which will be used to hold the ID of the selected attribute, and will be used to check whether the person already has the attribute or not.

Re. relationships, as well as the obvious relationships linking the PPL, HAJ and ATR tables, I added a relationship from PPL to HAJ.  I called it “PPL_HAJ~newRecCheck”, defined as:

toggle_atr2

So, it uses the global “ATR_ID” field and the PPL ID field to check for the existence of the record.
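Its criteria are, in essence (a sketch – the names of the two foreign key fields in HAJ are my assumptions):

PPL::ATR_ID = PPL_HAJ~newRecCheck::ATR_ID    // the attribute currently selected (global)
AND
PPL::id = PPL_HAJ~newRecCheck::PPL_ID        // this person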

And now the method for allocating attributes, which is the main point of this post.

On the People Details layout, we have a list of available attributes, in a portal.  If this person has this attribute, conditional formatting highlights the portal row:

FilterList ( PPL::attributesHeld ; "Equals" ; PPL_ATR::id ; 0 ) ≠ ""

So, using the excellent “FilterList” custom function (see Brian Dunning’s site), we can check whether the ID of the current portal row’s attribute exists in the person’s currently held tags.  If so, we highlight it.

A button on the portal row runs the “Toggle Tag” script:

toggle_atr

 

This is what it does:

  1. Set the global field to the value of the attribute under consideration.
  2. Use the “creator” relationship to check for the existence of this attribute allocation (i.e. whether this person already has this attribute ).
  3. If so, delete the attribute allocation.
  4. If not, create it.

The record creation is the nicest part of all of this.  By using “Set Field”, we not only create the record and populate the field that we’ve set (obviously), but also populate the other field(s) used in the relationship – in this case, that’s the PPL_ID field, the foreign key back to the People table.
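For reference, the “Toggle Tag” script boils down to something like this (a sketch – the demo file’s existence check and delete handling may well differ in detail):

# 1. Remember which attribute we're toggling (the one on the clicked portal row)
Set Field [ PPL::ATR_ID ; PPL_ATR::id ]
Commit Records/Requests [ With dialog: Off ]
# 2. Does this person already hold it?
If [ not IsEmpty ( PPL_HAJ~newRecCheck::PPL_ID ) ]
    # 3. Yes – delete the allocation
    Go to Related Record [ From table: "PPL_HAJ~newRecCheck" ; Using layout: "HAJ" ]
    Delete Record/Request [ With dialog: Off ]
    Go to Layout [ original layout ]
Else
    # 4. No – create it; the allow-creation relationship fills in PPL_ID for us
    Set Field [ PPL_HAJ~newRecCheck::ATR_ID ; PPL::ATR_ID ]
    Commit Records/Requests [ With dialog: Off ]
End If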

The icing on the cake is that we don’t even need to refresh the window to toggle the attribute in the Person Details screen – it refreshes itself, presumably by virtue of the Commit.  The whole thing happens as calmly as if I had used the check-box/text field option, but with all the potential extra functionality that the related table option provides.

In getting this sorted, I was grateful for help from various contributors to the Filemaker Developer Forum and from Matt Petrowsky’s excellent magazine site.


Data Separation – A Couple of Pitfalls

Well, the data separation exercise has been going pretty well, and I can really see how it will make life easier, now that several organisations will be using the system.  Swapping in a new interface file will be a doddle, with no need to import data etc.

BUT, there’s no gain without pain, and a couple of things recently have tripped me up.

1. GLOBAL VARIABLES

Global variables do not work across files.  So, where a variable is set in the interface file, and it is used to set a field value, or in a calc field, in the data file, it doesn’t work.  This has been an issue with the “portal sorting” technique that I use, whereby a variable is set to indicate the sort order to be used (“surname ascending”, “date descending”, etc.), and that variable is used to calculate the value in a field in the table (i.e. the table on which the portal is based).  The solution is simple, i.e. to use a global FIELD (in the Interface table for example) rather than a global VARIABLE, but it was a bit of a shock to see a time-honoured technique just not work.

2. RUNNING SCRIPTS WITH [FULL ACCESS]

The way I’ve been handling deletions so far is to have delete not allowed for system users, but then, after all the pre-checking and confirming etc., running a delete script with [Full access] privilege.  Worked fine, until the data was separated out into a separate file – the [Full Access] thing doesn’t work across files.

So the possible solutions are:

  • Allow deletion to all users – obviously not really an option.
  • OR…do all the scripting of the deletion routine in the data file – rather messy, and spoils the principle of having no scripts “over there”.
  • OR…set the delete protections to “limited”, and allow deletion only in certain circumstances.

I decided to go for the last option, as it was the easiest to implement, and seems pretty secure.  I have a global field in the interface table (which can obviously be seen throughout the system) called INT_g_DeleteOK.  Once all the deletion checking has been done, the script sets this field to “Yes”, does the delete, and IMMEDIATELY sets the field to “No”.  Obviously, until the field is set, it is empty, so testing for INT_g_DeleteOK = “Yes” seems pretty bullet-proof.
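Concretely, the “limited” delete privilege on those tables is just a calculation along the lines of INT::INT_g_DeleteOK = “Yes”, and the tail end of the delete script looks something like this (a sketch):

# …all the pre-checking and user confirmation has already happened by this point…
Set Field [ INT::INT_g_DeleteOK ; "Yes" ]
Delete Record/Request [ With dialog: Off ]
Set Field [ INT::INT_g_DeleteOK ; "No" ]     # shut the door again immediately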