In other words, it seems like you need to compare (either multiple pens or multiple inks) in order to draw meaningful conclusions beyond (this ink from this pen looks thusly).
Does that make sense? Is that in line with your thinking? Am I missing something?
Please be assured what you wrote makes sense.
This is my 'theory':
The pen, nib, paper, ambient temperature, humidity, air pressure, the age of the ink in the bottle, the length of time the ink has spent in the nib or converter or piston-filler barrel, and so on are all variables that could affect the flow, 'performance', and/or some other attribute of the
ink in question. When it comes down to it, 'my' Diamine Oxblood could be different from 'your' or some other reader's Diamine Oxblood; for all you, I, or anyone else know, I could have contaminated my bottle of it, or it could have been contaminated during the manufacturing or packing processes. All I can review is 'my' bottle of it, not anyone else's, and I cannot (or simply will not) repeat the tests using N>1 separate bottles of that type of ink to eliminate, or average over, the effects of variation between them.
For objective, rigorous testing, noting the test conditions and recording all the known variables (especially if they can be measured, e.g. temperature, air pressure) would be important, as would understanding how and by how much each variable would influence the specific attribute(s) of the subject the particular test is designed to elicit and/or assess. However, as amateurs and enthusiasts, I don't think we're really so scientific, so diligent, so patient, and so willing to invest effort and resources into doing everything we can to get to the unadulterated essence of something.
That's why what we produce are just user reviews – limited, one-eyed, loosely controlled, largely anecdotal information. Showing scans, photos and such will 'help' readers identify, isolate and/or filter out the reviewers' subjective perception biases, and supposedly present some representation of objective fact, but then scanners, cameras, monitors and ambient light can all introduce distortions of their own.
So, at the end of the day, we just need to make a hell of a lot of assumptions about the other variables, instead of controlling them all (much less allowing prospective readers of a review to control them), and focus on how to observe and how to evaluate what is observed.
Without a metric or a scale, 'wet' or 'dry' is meaningless unless compared to something else in the same class as a 'known quantity', just as 'hot' or 'cold' would be if there were no temperature scale and no thermometer. In the worst case, the reviewer is comparing the observed ink flow against the (imagined, recalled) average of his/her own fuzzy aggregate of experiences with other inks and pens, then summarising the observation with a single adjective, yet expecting readers to already share his/her frame of reference. That's why I personally don't want to tell
others whether an ink is 'wet' or 'dry', any more than I want to tell someone whether a particular bowl of curry is 'spicy' or 'mild'.
By showing a scale of multi-pass swabs alongside writing samples on the same piece or type of paper, we're simply allowing readers to see and assess for themselves the level of saturation produced by the combination of pen, nib, paper, handwriting style and technique, yadda yadda.
For my own purposes, I intend to settle on one or more of the desk pens I recently bought as my 'standard' test equipment, thus helping to eliminate or at least reduce some of the variables. The instrument will still be subject to the effects of contamination (from remnants of other inks, or greasy fingertips), aging, wear and tear, and so on. Essentially, it's up to me to 'know' the tool and the condition it is in when a test is conducted. I'm not going to test five or ten different inks on the same day with the same pen just to achieve some calibration; it would be pointless anyway, when each flush and each fill can introduce yet more variation into the condition of the tool.
Better that than to just state, "I wrote it with a Pilot Vanishing Point F nib." Between my fiancée and me, we have maybe ten of those here, accumulated over several years, and they don't all write identically. What does that really tell any reader who has a single Pilot VP pen (with an F nib), or none at all?
Ideally, once I have produced a number of ink reviews using my 'standard' test equipment, readers of those reviews will be able to piece together some semblance of a shared frame of reference.
Edited by A Smug Dill, 26 September 2018 - 23:14.