The requirement: In a portal, you need to be able to highlight a given record, and ensure that the portal row is visible. (For example, a portal contains 20 records, but displays only 3. You know which record you’re interested in, and you have the id of the record stashed in a global field, but the record may be out of sight when you display the portal.)
I devised a functional, but clunky, way of achieving this, then asked for advice on the Filemaker Technet. I got 2 really good suggestions. The first, from David Jondreau, suggested this:
So, the portal shows sessions (SSN) within a programme (PGM), and the id of the session that we’re interested in is held in the global field PGM::CURRENTSESSIONID. The loop just goes through the portal rows until it hits that record. “Go to portal row” ensures that it’s highlighted and visible.
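David's script, reconstructed here as a sketch (the exact step options are my assumptions; the field names are as described above):

```
Go to Portal Row [ Select ; First ]
Loop
  Exit Loop If [ SSN::id = PGM::CURRENTSESSIONID ]
  Go to Portal Row [ Select ; Next ; Exit after last ]
End Loop
```

Note that if the id is never found, the loop simply exits on the last row – in practice you might want to check first that the id exists among the related records.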
The second suggestion, from Mike Beargie, uses a custom function called “ListIndex” (available from Brian Dunning’s site). It’s a really concise one-liner:
The ListIndex CF just finds the position of the global id within the list of related SSN rec ids, and goes to that portal row, all in one script step. Really neat, but, as Mike pointed out, it relies on the portal not being filtered.
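As I understand it, Mike's one-liner runs along these lines (a sketch – the argument order of the ListIndex CF, and the SSN::id field name, are my assumptions):

```
Go to Portal Row [ Select ; ListIndex ( List ( SSN::id ) ; PGM::CURRENTSESSIONID ) ]
```

List ( SSN::id ) returns the related ids in portal order, and ListIndex returns the position of the target id within that list – hence the dependence on the portal being unfiltered.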
I’ve been getting to grips with the ExecuteSQL function. To say that I’ve been enjoying it would be partly true – it can be a little tricky, not without its frustrations, and disappointing in its performance. BUT, I’m beginning to see just how powerful and useful it could be. There are a couple of excellent resources that I’ve found very useful. One is Filemaker Hacks – Kevin Frank’s articles about ExecuteSQL on his blog are excellent. Kevin writes so clearly, and presents the material so well, that even when he’s repeating stuff that I already know (which isn’t often!), it’s a delight to read his articles, and several of his ExecuteSQL posts are particularly relevant here.
Also useful is Beverley Voth’s “The Missing Filemaker 12 ExecuteSQL Reference” – there’s a link to it on Filemaker Hacks too.
Here are some examples of expressions using the function:
Basic List of Field Contents
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE nameLast = 'Smith'" ; "" ; "" )
This will give a list of the first names of people whose last name is Smith. A couple of things that tripped me up, even at this basic stage:
- “PPL” is a table occurrence, not a base table name.
- ‘Smith’ is case sensitive.
To get around the case sensitivity issue, I’ve started changing all text to lower case within the expression, e.g.
ExecuteSQL ( "SELECT nameFirst FROM PPL WHERE LOWER(nameLast) = 'smith'" ; "" ; "" )
The next step is to use the query term (“smith”) as an argument, rather than including it in the SELECT statement, e.g.
ExecuteSQL ( "SELECT COUNT(*) FROM PPL WHERE LOWER(nameLast) = ?" ; "" ; "" ; "smith" )
So now we start to see how ExecuteSQL can take the place of FM Finds. Want to know how many Sue Smiths you’ve got?
ExecuteSQL ( "SELECT COUNT(*) FROM PPL WHERE LOWER(nameLast) = ? AND LOWER(nameFirst) = ?" ; "" ; "" ; "smith" ; "sue" )
And, of course, the arguments can use fields instead of the literal values in the example.
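For example, with a (hypothetical) global search field PPL::gSearchName:

```
ExecuteSQL ( "SELECT COUNT(*) FROM PPL WHERE LOWER(nameLast) = ?" ; "" ; "" ; Lower ( PPL::gSearchName ) )
```

Note the Lower() wrapped around the argument too, so the comparison stays case-insensitive whatever the user types.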
There’s been a lot of excitement about the fact that ExecuteSQL can be used for querying related data, without the need for that relationship to be defined via Filemaker’s relationship graph. This can mean simplification of the graph – often quite complex relationships have to be drawn on the graph just for the sake of a simple calculation of related data.
So, moving on to a basic JOIN, i.e. relating 2 tables and getting related data without (necessarily) having a relationship in Filemaker, say we want to know how many Sessions there are for a given Programme (where a Programme can have zero or many Sessions):
ExecuteSQL ( "SELECT COUNT(*) FROM SSN AS s JOIN PGM AS p ON s.id_PGM = p.id WHERE p.id = 'PGM00002866'" ; "" ; "" )
Note how we allocate an alias of “s” to use when we refer to the SSN table occurrence later in the expression, and “p” for the PGM table occurrence.
The “AS” in the previous example is optional, and the whole thing can be rearranged to be a bit more readable, e.g.
ExecuteSQL ( "SELECT COUNT(*)
FROM SSN s, PGM p
WHERE s.id_PGM = p.id
AND p.id = 'PGM00002866'" ; "" ; "" )
And now for a 3 level “find”:
ExecuteSQL ( "SELECT s.id FROM SSN s, PGM p, PTI t
WHERE s.id_PGM = p.id
AND p.id_PTI = t.id
AND LOWER(t.title) = s.FILTERINPUT" ; "" ; "" )
BUT (and it’s a big, disappointing BUT), ExecuteSQL can be very slow. In the last example above, looking for matching text in the PTI table, the calc takes about 6-8 seconds to query a set of about 24,000 records. That may not be a big deal when running a report, or an off-line process, but on an interactive screen (I was trying to use it for a “spotlight search” function), it’s clearly not good enough. In this situation, it looks like you’re better off “denormalising” the data (i.e. repeating, in the parent table, the field you want to query).
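As a sketch of that denormalisation (the field name is my assumption): give SSN a stored text field holding the related title, and the three-table query collapses to a fast single-table one:

```
/* In SSN, a stored text field "titleSearch", auto-entered
   via the existing SSN -> PGM -> PTI relationships as: */
Lower ( PTI::title )

/* ...so the query becomes: */
ExecuteSQL ( "SELECT id FROM SSN WHERE titleSearch = ?" ; "" ; "" ; Lower ( SSN::FILTERINPUT ) )
```

The trade-off, of course, is keeping the stored copy in sync if the title in PTI changes.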
“Virtual List” in Filemaker is not really a technique, but a concept. And it’s one that, although tricky to understand (at least for me!), is actually a very simple one, albeit with scope for complexity that can be rather scary. So I’m creating a series of steps, starting with this, the most simple of implementations, and working upwards in terms of complexity.
The requirement for this first example is simple: we have a table of People, and we want to produce a listing of each person’s first name. Obviously this first example does not even need a Virtual List – we could simply have a layout showing the values directly from the table – but diving straight in with a more complex example runs the risk of clouding the simplicity of the concept. (When I was trying to get my head around this concept, the complexity of available examples was a problem.)
The components required for any use of Virtual List are:
- A global variable, e.g. $$PEOPLELIST, which will contain all the values for the list, in a return-delimited list. The return-delimiting of the list means that each value in the variable will have a “value number”, and any value can be referenced using the GetValue function, e.g. GetValue ( $$PEOPLELIST ; 5 ) returns the fifth value in the list.
- A Virtual List table, pre-populated with the largest number of (blank) records you will ever need – i.e. the largest number of values that will be held in $$PEOPLELIST. So, for example, if you’re going to use Virtual List for a report based on People, and you currently have 8,000 People records, you should probably create 10,000 blank records in your VL table. Each record must have a sequential id, starting from 1. There must be no gaps, and records must not be deleted, i.e. there will always be 10,000 records, numbered 1 through 10000. (There is another field in the Virtual List record, “name”, about which more later.) Obviously, we use a script to populate the table in the first place.
- A script to create and populate the variable. This will result, for example, in a return-delimited value in $$PEOPLELIST for each record in the People table. Note that, if the field you’re concerned with has blank values in any records, a blank space needs to be written to the variable. (We’ve seen that each value in the variable has a “value number”, and this will, in time, equate to the serial number of the record in the table. Hence the need for there to be no gaps.)
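As a sketch, the two scripts mentioned above might look like this (the layout names, and the use of a ¶-terminated list, are my assumptions):

```
# One-off: populate the Virtual List table with 10,000 serial-numbered records
Go to Layout [ "VL" ]
Set Variable [ $i ; Value: 0 ]
Loop
  Exit Loop If [ $i ≥ 10000 ]
  Set Variable [ $i ; Value: $i + 1 ]
  New Record/Request
  Set Field [ VL::id ; $i ]
End Loop

# Build $$PEOPLELIST, one value per People record, blanks preserved
Go to Layout [ "PPL" ]
Show All Records
Go to Record/Request/Page [ First ]
Set Variable [ $$PEOPLELIST ; Value: "" ]
Loop
  Set Variable [ $$PEOPLELIST ; Value:
    $$PEOPLELIST & If ( IsEmpty ( PPL::nameFirst ) ; " " ; PPL::nameFirst ) & ¶ ]
  Go to Record/Request/Page [ Next ; Exit after last ]
End Loop
```

The If() test is what writes a blank space for empty names, keeping the value numbers in step with the record ids.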
So far, so very simple. All we’ve done is write a variable with a bunch of people’s names – no big deal. But what’s been going on in the Virtual List table while we’ve been doing this?
Well, the vital component to the whole Virtual List concept is a field in the Virtual List table, which I’m calling “name”. This is an unstored calc field, defined as:
GetValue ( $$PEOPLELIST ; id )
So, into this field we automatically get the contents of the relevant value in the global variable. The relevant value is obviously the nth value, where “n” is the id of the record, which in turn is the number of the value in the variable. So if the 823rd person in the original file has a first name of George, because we’ve set the 823rd value in the variable to that name, we automatically have “George” in the name field in the Virtual List table. Note that the records in the Virtual List table don’t need to be written or updated – the unstored calc is automatically refreshed when the global variable on which it depends is updated.
We can now use our Virtual List table as the source for, say, a report listing people’s names.
At this stage, this may seem a) obvious, and b) pointless – but stick with it! In future posts, we’ll be looking at how this can be really useful. For now, consider the possibilities if, instead of just one global variable, we have several. We may, for example, want to build a reporting table including related values (from other tables) for each person. That’s when it starts to get really useful.
A recent post on the TechNet forum, plus a succession of typically helpful answers, provided a very useful tutorial in how best to use the Case function to calculate a marked up cost, with the markup being based on the value of Cost.
Here’s the original formula. “Why won’t it work?” was the cry for help:
Case (
Cost < "5" ; Cost * "4" ;
Cost > "5.01" and Cost < "15" ; Cost * "3.5" ;
Cost > "14.99" and Cost < "25" ; Cost * "3" ;
Cost > "24.99" and Cost < "40" ; Cost * "2.75" ;
Cost > "39.99" and Cost < "60" ; Cost * "2.5" ;
Cost > "59.99" and Cost < "100" ; Cost * "2.25" ;
Cost > "99.99" and Cost < "200" ; Cost * "2" ;
Cost > "199" ; Cost * "1.6" )
Well, with the field “Cost” defined as a number, and the calculation result defined as a number, the first step is to remove the quotes, which makes Filemaker see the numeric values as numbers, rather than as text.
Case (
Cost < 5 ; Cost * 4 ;
Cost > 5.01 and Cost < 15 ; Cost * 3.5 ;
Cost > 14.99 and Cost < 25 ; Cost * 3 ;
Cost > 24.99 and Cost < 40 ; Cost * 2.75 ;
Cost > 39.99 and Cost < 60 ; Cost * 2.5 ;
Cost > 59.99 and Cost < 100 ; Cost * 2.25 ;
Cost > 99.99 and Cost < 200 ; Cost * 2 ;
Cost > 199 ; Cost * 1.6 )
This will now work, but it’s making much harder work of the calc than it needs to. Filemaker only works through the Case statement until it meets a true condition, then it bails out – there’s therefore no need for all of these > and < evaluations.
So, enter V3:
Case (
Cost < 5 ; Cost * 4 ;
Cost < 15 ; Cost * 3.5 ;
Cost < 25 ; Cost * 3 ;
Cost < 40 ; Cost * 2.75 ;
Cost < 60 ; Cost * 2.5 ;
Cost < 100 ; Cost * 2.25 ;
Cost < 200 ; Cost * 2 ;
Cost * 1.6 )
(Note the final default value, 1.6, which applies to any value of Cost equal to or greater than 200 – no need for an evaluation there; if we get that far, that’s the value to use.)
The next efficiency is to use calculation variables, for the value of Cost, and the markup to be applied.
So, finally, V4:
Let ( [
$Cost = Cost ;
$Markup = Case (
$Cost < 5 ; 4 ;
$Cost < 15 ; 3.5 ;
$Cost < 25 ; 3 ;
$Cost < 40 ; 2.75 ;
$Cost < 60 ; 2.5 ;
$Cost < 100 ; 2.25 ;
$Cost < 200 ; 2 ;
1.6 ) ] ;
$Cost * $Markup)
But then came another interesting point, i.e. whether the Cost field itself is the right place for the calculation to happen. In the case of a “net price” such as this, which may need to be overridden, the better way is to have a separate Markup field, defined as an auto-enter calc, with the initial value of the markup amount calculated as above. The (new) Price field is then defined as a simple calc field, i.e. Price = Markup * Cost. On the Markup field, the option “Do not replace existing value of field (if any)” should be ticked, to prevent historical and “override” data being replaced.
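In outline, the three fields would then be defined like this (a sketch using the names above):

```
Cost   : Number (data entry)

Markup : Number, auto-enter calculation,
         "Do not replace existing value of field (if any)" ticked:
    Case (
        Cost < 5 ; 4 ;
        Cost < 15 ; 3.5 ;
        Cost < 25 ; 3 ;
        Cost < 40 ; 2.75 ;
        Cost < 60 ; 2.5 ;
        Cost < 100 ; 2.25 ;
        Cost < 200 ; 2 ;
        1.6 )

Price  : Calculation, number result:
    Cost * Markup
```

Because Markup is a real, editable field, an override simply means typing a different markup into it – the Price calc then picks that up automatically.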
For a more comprehensive look at calc and auto-enter fields, see here. Thanks, as ever, to the Technet contributors who combine to make it such a great resource.
In the process of re-writing “FOCUS” in FM12, I’m experimenting with different ways of recording people’s attributes. I’ve realised that there’s logically no difference between a person’s gender, and the fact that s/he likes football – both of these can be regarded simply as attributes, i.e. “IsFemale” and “LikesFootball”. In terms of reporting and counting, I think it will be much easier to treat all attributes equally, i.e. that a person either has them or doesn’t.
A while ago, I documented the way that I’ve been handling simple “on/off” attributes, which I’ve called “tags” in the current version of the system. Here’s that blog post – it seems to work pretty well, and the users have said that they like it.
However, because this deals only with “on/off” tags, the technique needs a bit of refinement – if I’m going to use it for things like gender, sexuality, ethnicity, etc., I need to build in the “switching off” of any currently held setting. After all, if a person is already on the system as “female”, and we realise that we’ve recorded their gender wrongly, we can’t simply add the “IsMale” attribute – we also need to remove “IsFemale”.
So, on the Attribute record, in addition to holding the Attribute itself (“IsMale”, “Likes football”, etc.), I’m now also storing an indicator, “IsExclusive”, and an attribute type (“gender”, “sexuality”, “tag”, etc.). If the “IsExclusive” indicator is set to 1, it means that if a person has one attribute of this type, s/he can’t hold another.
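The “switching off” could then be scripted along these lines (a sketch only – the join-table structure and variable names are my assumptions):

```
# Setting attribute $newAttributeID for person $personID:
# 1. Look up the new attribute's type and its IsExclusive flag.
# 2. If IsExclusive = 1, find any existing person-attribute join
#    records for $personID whose attribute has the same type,
#    and delete them (e.g. removing "IsFemale" before adding "IsMale").
# 3. Create the new join record for $personID and $newAttributeID.
```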
I have to say that all this seems a bit over-complicated when the alternative is simply to have a “gender” field, possibly with a couple of Boolean calcs (IsMale and IsFemale), to be used for counting and reporting. But there’s a real attraction in having all attributes in one place, and handled in the same way.
If you have any thoughts, opinions, or experience in this area, please comment!