#131
" wrote: gwhite wrote: I'm talking about the 0.1% noise rating for the machine. I'd question a peak noise rating for a machine. Caveat: I don't know squat about the dope detecting industry. I'll read the links when I get some time. (I don't pay attention to the dope stuff much.) They just toss out the 0.1% number in a parenthesis. It's not clear if it really is a property of the machine or just the particular measurement they are talking about in that one paragraph. It should be RMS. I could not find a datasheet/specsheet on the machine. I found something close, but stunningly a noise spec isn't given (or I didn't see it). http://www.beckmancoulter.com/Litera..._monograph.pdf http://www.beckmancoulter.com/produc...ry/epicsxl.asp In any case, everyone else talking about "peak" is referring to the number of counts in a signal that looks like a peak, not the peak height of the signal, ... I'm not clear on how the counts could simply be on the peak (max) bin of the lobe(s). The count for the total of each lobe (with the width considered) would seem to be more sensible. That is, integrated counts according the particular fluorescence lobe. "Where to cut off" counting would be where adding more counts made no "significant" difference to the total count of the lobe. I couldn't spend more time studying this and could be way off base regarding how things are counted and how the test is done. I am not familiar with the field or the lingo, so it is a bit of a struggle at the base of the learning curve. ... which is what you are referring to with "peak noise" and the peak-vs-RMS jitter reference. (I agree "peak noise" doesn't mean anything.) I'm saying a peak rating to specify the machine noise does not make sense. For random noise, the peak isn't known. Wait long enough, and you'll get a higher peak. ("Wait longer" to get a higher p-rms doesn't necessarily help Tyler, because "waiting longer" means a lower probability of a false positive.) 
But peaks certainly do matter for the *measurement*. That "bit error" talked about in the article I linked is caused by the peak exceeding the threshold (not the average/RMS) and is exactly what can result in a false detection. (The threshold detector itself has an uncertainty.) Communications systems are designed around the error ratings (noise). Peak values certainly matter, and are considered in a probabilistic way.

Chung is questioning how they reliably tell that a sample has two signals rather than one, which is a good question.

Look at the pdfs, but do like a good scientist: don't read them, just look at the pictures. It will be clear what "peak" refers to in the plots.

Most of the plots, anyway; that's the issue. The main instrumentation/measurement area I thought was interesting was the "gating" area. Campbell said false levels of up to 1.1% could be found with operator error. In my business we don't just consider signal/noise ratios. We more completely care about signal/(noise + distortion + spurious) ratios. For this realm, I suppose operator error would be called a spurious result. We can't dismiss the operator from the system, especially if the operators have been shown to make significant errors. The machine's accuracy and precision are one part of the system, not the whole system.

I think I also read Hamilton's reading was 1.3% (did I?). If so, that 1.3/1.1 ratio is uncomfortable, to say the least.

I also read the comments on gating in the Nelson paper. It sounded like the adjustments are manual and that "mistuning" by the operator could occur. I was not sure that each of the lobes could not be individually mistuned, which would throw off a ratio.

More curious was Figure 1A-B in the Nelson paper. The dilution seems to have *moved* the small fluorescence lobe! Why would the frequency (horiz axis) of the fluorescence change? I guess I really don't understand the machine and what it does. (I didn't read altra_monograph.pdf.)
Also, if the count is an integrated one per lobe, the horizontal count bar for the little lobe (Fig. 1B) actually extends into the large lobe! This seems odd, and maybe that is what Robert is referring to.

For the comments above, be like a good scientist and presume I don't know squat about the machine, its gating, and its method of counting. Based on the little bit I've studied, I would probably have thrown the case out for a few different reasons -- no judgement on Hamilton. That doesn't mean I don't have my suspicions.

I can't believe I got sucked into a doper thread.
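The "bit error" point above -- a detector fires falsely whenever a noise peak crosses the threshold -- can be put in numbers for an assumed Gaussian noise model (an illustration only, not the machine's actual statistics): the single-sample exceedance probability falls fast as the threshold rises, but it never reaches zero.

```python
import math

def exceed_prob(k):
    """Probability that one zero-mean Gaussian noise sample lands above
    a k-sigma threshold (upper tail of the normal distribution)."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

# A fixed threshold is eventually crossed by noise alone; the design
# question is how often, and whether that rate is acceptable.
for k in (2, 3, 4, 5):
    p = exceed_prob(k)
    print(f"threshold {k} sigma: P(exceed) = {p:.2e}, "
          f"about {p * 1e6:.1f} false hits per million samples")
```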
#133
gwhite wrote:
In any case, everyone else talking about "peak" is referring to the number of counts in a signal that looks like a peak, not the peak height of the signal, ... I'm not clear on how the counts could simply be on the peak (max) bin of the lobe(s). The count for the total of each lobe (with the width considered) would seem to be more sensible.

The counts aren't on the max bin (the mode) of what you are calling the lobe and everyone else is calling the peak. They are of the total.

That is, integrated counts according to the particular fluorescence lobe. "Where to cut off" counting would be where adding more counts made no "significant" difference to the total count of the lobe.

The question is how to define this if the lobes/peaks aren't well separated or are noisy. For example, if there is no strong signal but just noise fluctuations, the signal you get keeps going up as you integrate over more bins, so "where to cut off" is not easy to define.

I'm saying a peak rating to specify the machine noise does not make sense. For random noise, the peak isn't known. Wait long enough, and you'll get a higher peak. ("Wait longer" to get a higher p-rms doesn't necessarily help Tyler, because "waiting longer" means a lower probability of a false positive.) But peaks certainly do matter for the *measurement*. That "bit error" talked about in the article I linked is caused by the peak exceeding the threshold (not the average/RMS) and is exactly what can result in a false detection. (The threshold detector itself has an uncertainty.) Communications systems are designed around the error ratings (noise). Peak values certainly matter, and are considered in a probabilistic way.

As long as you continue to use "peak" in a sense which is standard for a different field, but different from everyone else in this discussion, it's just going to get more confusing rather than less.

Chung is questioning how they reliably tell that a sample has two signals rather than one, which is a good question.
Look at the pdfs, but do like a good scientist: don't read them, just look at the pictures. It will be clear what "peak" refers to in the plots. Most of the plots, anyway; that's the issue.
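The "where to cut off" difficulty raised above can be sketched with toy numbers. Assuming a flat, noise-only background histogram (made-up counts, purely illustrative), the running integral climbs steadily and never levels off, so a rule like "stop once additional bins make no significant difference to the total" never triggers:

```python
import random

random.seed(7)

# A made-up, noise-only histogram: every bin holds background counts,
# no real peak anywhere. The running integral climbs roughly linearly
# and never levels off, so there is no natural place to stop counting.
bins = [random.randint(8, 12) for _ in range(50)]
running = []
total = 0
for count in bins:
    total += count
    running.append(total)

print("after 10 bins:", running[9])
print("after 25 bins:", running[24])
print("after 50 bins:", running[49])
```

With a genuine, well-separated lobe, the background beyond the lobe would contribute little and the running total would flatten; that flattening is exactly what is missing when the "signal" is noise.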
#134
" wrote:
gwhite wrote: ... so "where to cut off" is not easy to define.

We agree.

As long as you continue to use "peak" in a sense which is standard for a different field, but different from everyone else in this discussion, it's just going to get more confusing rather than less.

The original comment was:

JVD Sluis wrote: "A noise level of 0,1% would mean that the graph can show a peak of 0,1% without the corresponding level of RBCs being present."

A noise level rating for the machine would mean some "peak," as folks are using it here, could be *greater* than 0.1%. The question is "how much greater" with respect to the "valid test threshold," and how probable it is that the threshold will be exceeded. Noise level *ratings* aren't specified in "peaks," regardless of the language. That's one reason why Campbell was concerned. (It's true I wasn't clear on how others were using "peak." Not surprising given that I didn't look at the papers till Friday, I think.)

Yes, we can use "peak" to describe the integrated energy (or power) of the lobes, but that does not rule out using it also for describing random processes and the /maximum/ (greater-than-average, greater-than-mean, greater-than-root-mean-square) values implicit in random processes across time. Most people I know just call a localized maximum a peak -- that is what it is. The "peaks" that I'm talking about (and Campbell) are in the time domain. The peaks in the plots are frequency-domain peaks. I'm accustomed to both domains, and the transformation between the two.

For this test, the counts are positive. So without really knowing, we could make a WAG for the noise distribution of the instrumentation (plus/including the various lab technicians in various labs, and every other possible error) to be Rayleigh. There is no "peak limit" to that distribution.
The question is how often (in time, then transformed to the f-domain) a given threshold is exceeded (the probability of a false positive), given the instrument and all the other errors that could happen, including operator error. The "peaks" are what do that. That's why Campbell wants the probability question answered.
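The Rayleigh WAG above has a convenient closed-form tail, so the "how often is a threshold exceeded" question can at least be illustrated. A sketch assuming unit sigma (the real sigma, and whether Rayleigh is even the right model for this instrument, are unknown):

```python
import math

def rayleigh_exceed(t, sigma=1.0):
    """Upper-tail probability of a Rayleigh(sigma) variable:
    P(X > t) = exp(-t^2 / (2 sigma^2))."""
    return math.exp(-(t * t) / (2.0 * sigma * sigma))

# The tail never reaches zero: every finite threshold is eventually
# exceeded, just with rapidly shrinking probability per measurement.
for t in (1, 2, 3, 4):
    print(f"P(X > {t} sigma) = {rayleigh_exceed(t):.2e}")
```

This is the "no peak limit" point in probability terms: a spec can only promise that exceedances above some level are rare, never that they are impossible.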