|
Post by Professor Lake Shore on Jan 31, 2017 16:51:48 GMT -5
Can I use a coax cable for a low-current measurement in my probe station?
No, it isn’t possible because of the insulation leakage and charging currents encountered with a coax cable configuration. Triax cabling can eliminate both leakage and charging currents because in a triax configuration, the probe arm feedthrough, cabling, and probe blade are fully guarded all the way to the probe tip for reliable low current measurement performance (below 1 nA). For further information, read Lake Shore’s “Considerations for Low Current Measurements in Cryogenic Probe Stations” app note.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:50:01 GMT -5
How do I clean a sample holder?
This depends on what has been used on the sample stage. Do not use abrasives or scrub the sample holder; doing so will remove the gold plating. Most adhesives can be cleaned with acetone applied to a soft, clean cloth, then rinsed with isopropyl alcohol. Apiezon N grease is not soluble in alcohols or acetone and will need to be removed with xylene. When using VGE 7031 varnish for semi-permanent mounting, the sample can be removed and the sample holder cleaned by soaking in a solution of equal parts ethanol and toluene. Isolated coaxial and triaxial sample stages should be handled with care when cleaning and installing; Lake Shore does not recommend sonicating these special stages.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:43:10 GMT -5
Where exactly is the active area on a Hall probe?
The active area is the effective area over which the Hall sensor averages the magnetic field. Knowing where it is can be a concern for users who are trying to measure magnetic fields exhibiting large field gradients. It’s also important to know when performing magnet pole surface testing, where there can be a dramatic falloff of field strength near the surface. In this case, a difference of only a few thousandths of an inch in distance between the sensor active area and the magnet surface may change the gaussmeter reading by more than the tolerance allowed. To find out where the active area is for your specific probe, see this document. It contains schematic diagrams for each Hall probe model made by Lake Shore, indicating where the active area is in relation to the Hall sensor on or in the stem (on transverse probes, the sensor is on the same side as the Lake Shore logo on the handle; on axial probes, which measure fields normal to their end, the sensor is embedded dead center within the stem).
The active area is also something to keep in mind when ordering a probe. Because Hall probes measure an average magnitude over their active area, it’s important to understand the relationship between active area and field gradients. For more information on this, read the “Gradient” section of Lake Shore’s online Hall Probe Selection Guide.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:40:46 GMT -5
My USB drivers don’t update automatically using Windows 10. Do you have those drivers?
The instrument USB drivers are available through Windows Update. This is the recommended installation method because it ensures you always have the latest driver version (the disk shipped with the instrument may contain older drivers). If you have difficulty installing a driver this way, for instance when using Windows 10, the USB driver is also available as a downloadable ZIP file on the Lake Shore software download page.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:36:41 GMT -5
I am having trouble understanding how the setpoint on the temperature controller works because while it seemed to ramp fine the first time using it, it doesn’t seem to be doing it on second use. What exactly is happening?
For precision temperature control, particularly when measuring a sample that has to be warmed slowly, all Lake Shore controllers calculate the control output based on a temperature setpoint and feedback from the control sensor. The setpoint ramp feature is used to ensure smooth, continuous changes in the temperature setpoint: it controls how fast the controller moves the setpoint value from its current value to the value you want, based on a ramp rate ranging from 0.1 K/min to 100 K/min. This feature provides for faster experiment cycles because data can be taken as the system changes temperature. It can also be used to make a more predictable approach to a setpoint temperature without the worry of overshoot or excessive settling times.
When a controller is used for the very first time, the default setpoint for the instrument is zero (0), and the user has to program in a setpoint before using the setpoint ramp function. Because ramping begins at zero, reaching the defined setpoint typically takes a while; how long depends on the ramp rate entered. It is usually a gradual process because the instrument must constantly recalculate the control parameters, determining how much heater power to apply as it approaches the desired setpoint.
This isn’t necessarily the case on subsequent uses. If you don’t reset the setpoint to the current control sensor temperature before beginning a new setpoint ramp, the ramp starts from the old setpoint and can finish very quickly. This can be avoided by always resetting the setpoint to the current temperature before beginning a new controlled ramp. For more about how to fully benefit from the setpoint ramp feature, you might want to read this app note written by Jeff Maynard, Lake Shore Service Manager.
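The behavior described above can be illustrated with a toy model of the ramp logic (a sketch, not actual controller firmware; the function name and update interval are assumptions):

```python
def ramp_setpoint(current, target, rate_k_per_min, dt_s):
    """Advance the working setpoint one step toward the target.
    rate_k_per_min: ramp rate in K/min (0.1 to 100 on Lake Shore controllers).
    dt_s: update interval in seconds."""
    step = rate_k_per_min * dt_s / 60.0
    if abs(target - current) <= step:
        return target  # within one step: snap to the target
    return current + step if target > current else current - step

# Starting from 0 K (the power-on default), ramping to 300 K at 10 K/min
# takes 30 minutes; starting near the current temperature finishes quickly.
sp = 0.0
minutes = 0
while sp != 300.0:
    sp = ramp_setpoint(sp, 300.0, 10.0, 60.0)  # one update per simulated minute
    minutes += 1
print(minutes)  # 30
```

This is why resetting the setpoint to the current sensor temperature before each ramp matters: the ramp always starts from the old setpoint, not from the measured temperature.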
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:33:35 GMT -5
On my temperature instrument, why are 25 ohm and 50 ohm my only choices for heater power?
The heater outputs are designed to work optimally with 25 or 50 ohm heater resistance, so these are considered the “standard” values. But the heater resistance can be other values, too: for any resistance less than 50 ohms, use the 25 ohm setting; for any higher heater resistance, use the 50 ohm setting. The User Max Current setting is useful when using a non-standard heater resistance value. For more information about the User Max Current setting, see the table in the instrument’s user manual that gives examples of different heater resistances and max current settings, along with the resulting maximum heater power.
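Because heater power is P = I²R, the manual’s table can be reproduced for any heater; a minimal sketch (the function names are illustrative, and the actual allowed max current values are listed in the manual):

```python
def range_setting(heater_ohms):
    """Pick the resistance setting for a non-standard heater:
    below 50 ohms use the 25 ohm setting, otherwise the 50 ohm setting."""
    return 25 if heater_ohms < 50 else 50

def max_heater_power(heater_ohms, user_max_current_a):
    """Maximum power the output can deliver into the heater: P = I^2 * R."""
    return user_max_current_a ** 2 * heater_ohms

# A 30-ohm heater uses the 25-ohm setting; limited to 1 A it can
# dissipate at most 30 W.
print(range_setting(30.0), max_heater_power(30.0, 1.0))
```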
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:31:46 GMT -5
How often should cryogenic temperature sensors be recalibrated?
There are no specific published regulations or guidelines that establish requirements for the frequency of recalibration of cryogenic sensors. There are military standards for the recalibration of measuring devices, but they only require that a recalibration program be established and then adhered to. Many highly regarded manufacturers of more complex measuring devices, such as voltmeters, recommend that such instruments be recalibrated every six months.
Temperature sensors are complex assemblies of wires, welds, electrical connections, dissimilar metallurgies, electronic packages, seals, etc., and hence, have the potential for drift in calibration. Like a voltmeter, where components degrade or vary with time and use, all of the “components” of a temperature sensor may also vary, especially where they are joined together at material interfaces. Degradation in a sensor materials system is less apparent than deterioration in performance of a voltmeter.
Lake Shore sensor calibrations are certified for one year. Depending upon the sensor type and how it is used (see page 195 in the temperature catalog appendix for a list of environmental effects contributing to calibration degradation over time), it is recommended that sensors be recalibrated in the Lake Shore Calibration Service Department periodically. Certainly, recalibration before important experiments would be advisable.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:29:13 GMT -5
With negative temperature coefficient (NTC) temperature sensors, is higher resistance better?
For both germanium and Cernox RTDs, there is a common misconception that higher resistance equates to a “better” sensor, if that is interpreted to mean a sensor with better resolution or better accuracy (i.e., lower uncertainty). It is important to understand that the concepts of resolution and accuracy are largely meaningless if applied only to the temperature sensor; they become meaningful only in the framework of the electronics used to measure its resistance. Many instrumentation subtleties affect low temperature thermometry measurements, including the excitation mode and how the instrument switches between resistance ranges. Ultimately, the excitation level and resistance range determine the electronic resolution and accuracy which, in turn, determine the temperature resolution and accuracy. Since, for a given NTC thermometer type, higher resistance implies higher sensitivity, you might expect higher resistance thermometers to yield better resolution and accuracy. But as explained more thoroughly in this application note, this is not the case in general. The “best” sensor in terms of temperature resolution and accuracy is somewhat random, in that it depends on which resistance range the instrument is forced to operate in. Overall, low resistance samples perform as well as high resistance samples, and the differences that do occur are generally less than a factor of two.
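The argument can be made concrete with the relation ΔT = ΔR / |dR/dT|: a higher-resistance sensor gains nothing if its higher sensitivity is matched by a coarser resistance range in the electronics. A sketch with purely hypothetical numbers:

```python
def temperature_resolution(delta_r_ohms, sensitivity_ohms_per_k):
    """Temperature resolution implied by the electronics' resistance
    resolution: dT = dR / |dR/dT| (NTC sensitivities are negative)."""
    return delta_r_ohms / abs(sensitivity_ohms_per_k)

# Hypothetical: a high-resistance sensor with 10x the sensitivity reads
# on a resistance range with 10x coarser resolution, so nothing is gained.
low_r  = temperature_resolution(0.01, -2.0)    # 0.01 ohm resolution, -2 ohm/K
high_r = temperature_resolution(0.10, -20.0)   # coarser range, 10x sensitivity
print(low_r, high_r)  # both about 0.005 K
```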
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:26:45 GMT -5
What is the procedure for testing an RTD sensor?
Follow this test procedure:
1. Verify the RTD: Place the positive (+) lead of your multimeter on I+ or V+ and the negative (-) lead on V- or I-. You should measure the resistance expected for the sensor’s temperature at the time of the test.
2. Verify the sensor leads (measuring between the I and V leads): Measure the resistance between the I+ and V+ leads; you should measure the total resistance of your wire. Then measure the resistance between the V- and I- leads; again, you should measure the total resistance of your wire.
3. Verify sensor lead isolation: Place one lead of your multimeter on I+ or V+ and the other lead on your system ground; you should measure an “open” (infinite resistance). Repeat with one lead on I- or V- and the other lead on your system ground.
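As a quick sanity check on step 1, the expected resistance can be compared with the reading in software. This sketch assumes a 100-ohm platinum RTD with the standard 0.00385/K coefficient; the function names and 5% tolerance are illustrative:

```python
def pt100_resistance(temp_c):
    """Approximate PT-100 resistance using the standard 0.00385/K coefficient."""
    return 100.0 * (1.0 + 0.00385 * temp_c)

def check_reading(measured_ohms, expected_ohms, tolerance=0.05):
    """Return True if a multimeter reading is within tolerance (default 5%)
    of the resistance expected at the sensor's current temperature."""
    return abs(measured_ohms - expected_ohms) <= tolerance * expected_ohms

# At 25 C a PT-100 should read roughly 109.6 ohms:
expected = pt100_resistance(25.0)
print(check_reading(110.2, expected))  # a reading of 110.2 ohms passes
print(check_reading(150.0, expected))  # a reading of 150 ohms indicates a problem
```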
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:24:10 GMT -5
What is the procedure for testing a diode sensor?
Ensure that your multimeter is rated to measure resistances up to 10 megohms, then follow this test procedure:
1. Verify the diode (part 1): Place the positive (+) lead of your multimeter on I+ or V+ and the negative (-) lead on V- or I-. You should measure approximately 5 megohms at room temperature.
2. Verify the diode (part 2): Place the negative (-) lead of your multimeter on I+ or V+ and the positive (+) lead on V- or I-. You should measure an “open” (infinite resistance).
3. Verify the sensor leads (measuring between the I and V leads): Measure the resistance between the I+ and V+ leads; you should measure the total resistance of your wire. Then measure the resistance between the V- and I- leads; again, you should measure the total resistance of your wire.
4. Verify sensor lead isolation: Place one lead of your multimeter on I+ or V+ and the other lead on your system ground; you should measure an “open” (infinite resistance). Repeat with one lead on I- or V- and the other lead on your system ground.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:22:26 GMT -5
Where is the best place to mount a sensor so as to minimize differences in temperature (gradients) between the sensor and the sample?
Temperature gradients exist because there is seldom perfect balance between the cooling source and heat sources. Even in a well-controlled system, unwanted heat sources like thermal radiation and heat conducting through mounting structures can cause gradients. So for the best temperature measurement accuracy, position sensors near the sample, so that little or no heat flows between the sample and sensor. However, you need to keep in mind that this may not be the best location for temperature control. The best control stability is achieved when the feedback sensor is near both the heater and cooling source to reduce thermal lag. And if both control stability and measurement accuracy are critical, it may be necessary for you to use two sensors—one for each function. Many temperature controllers, like the Lake Shore Model 336, have multiple sensor inputs for this very reason.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:17:55 GMT -5
Why does my temperature instrument display “S. Over,” “T. Over,” “S. Under,” or “T. Under”?
These messages indicate that the instrument cannot display a valid temperature because the sensor value exhibits one of these conditions, in most instances caused by a problem with the sensor wiring or the sensor itself:
“S. Over” – the sensor value is greater than the physical limit of the input.
“S. Under” – the sensor value is zero or negative.
“T. Over” – the sensor value is beyond the highest temperature point on the curve assigned to the input (for NTC devices, this means the sensor value is lower than the value at the highest-temperature breakpoint; for PTC devices, it is higher than the value at the highest-temperature breakpoint).
“T. Under” – the sensor value is beyond the lowest temperature point on the curve assigned to the input (for NTC devices, this means the sensor value is higher than the value at the lowest-temperature breakpoint; for PTC devices, it is lower than the value at the lowest-temperature breakpoint).
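The breakpoint logic above can be sketched as a small function (a toy model, not instrument firmware; the curve layout and example values are assumptions):

```python
def input_status(sensor_value, curve, ntc, input_limit):
    """Classify a reading the way the display messages do.
    curve: list of (sensor_value, temperature_K) breakpoints, ascending in T.
    ntc: True for negative temperature coefficient sensors."""
    if sensor_value > input_limit:
        return "S. Over"       # beyond the physical limit of the input
    if sensor_value <= 0:
        return "S. Under"      # zero or negative sensor value
    value_at_lowest_t = curve[0][0]
    value_at_highest_t = curve[-1][0]
    if ntc:
        # NTC: resistance falls as temperature rises
        if sensor_value < value_at_highest_t:
            return "T. Over"
        if sensor_value > value_at_lowest_t:
            return "T. Under"
    else:
        # PTC: resistance rises with temperature
        if sensor_value > value_at_highest_t:
            return "T. Over"
        if sensor_value < value_at_lowest_t:
            return "T. Under"
    return "OK"

# Hypothetical NTC curve: 10 kilohms at 1.4 K down to 100 ohms at 300 K
ntc_curve = [(10000.0, 1.4), (100.0, 300.0)]
print(input_status(50.0, ntc_curve, True, 100000.0))     # T. Over
print(input_status(20000.0, ntc_curve, True, 100000.0))  # T. Under
print(input_status(5000.0, ntc_curve, True, 100000.0))   # OK
```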
If one of these conditions displays, the next step is to test for an open or short in the sensor leads and to check whether the sensor’s isolation has been compromised. How to do this is explained in the “procedure for testing a diode sensor” and “procedure for testing an RTD sensor” FAQs.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:14:55 GMT -5
Should I use 2 or 4 wires to attach my sensor to the instrument?
Using 4 wires to attach a sensor to any temperature instrument is always recommended. If you use 2-wire leads, the resistance of the leads adds to the actual sensor value and introduces error into the measurement. Using 4 wires eliminates any error induced by the resistance of the leads.
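To see the size of the error, a 2-wire reading can be modeled as the sensor resistance plus both lead resistances (the function name and lead values are illustrative, roughly fine-gauge copper wire):

```python
def two_wire_reading(r_sensor_ohms, r_each_lead_ohms):
    """In a 2-wire hookup, current and voltage share the same pair of
    wires, so both lead resistances add to the measured value."""
    return r_sensor_ohms + 2.0 * r_each_lead_ohms

# Two meters of fine copper wire at roughly 0.54 ohm/m per lead adds
# about 2.2 ohms to a 100-ohm sensor reading; in a 4-wire hookup the
# voltage leads carry essentially no current, so this error vanishes.
print(two_wire_reading(100.0, 2.0 * 0.54))
```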
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:12:55 GMT -5
I am using a silicon diode and have the standard curve assigned to the input; however, my room temperature reading is off by 20 K. What is the cause of this?
You most likely have selected the wrong standard curve. Most Lake Shore temperature instruments have both the DT-470 (Curve 10) and the DT-670 standard curves included. At room temperature, the sensor value will show an approximate 20 K difference if the wrong curve is selected.
|
|
|
Post by Professor Lake Shore on Jan 31, 2017 16:10:46 GMT -5
I calibrated a Cernox sensor and entered the curve into my instrument; however, I do not see it as a curve selection when configuring the input. Why is this?
Each sensor type requires a specific curve format. In the case of a Cernox® sensor, the curve format must be in LogΩ (Base 10) vs. K. If you entered the curve in the resistance vs. temperature format, it will not appear as a selection.
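Converting resistance vs. temperature calibration points into the LogΩ vs. K format is a one-line transform; a minimal sketch (the function name and sample points are hypothetical):

```python
import math

def to_log_ohm_curve(points):
    """Convert (resistance_ohms, temperature_K) pairs into the
    log10(resistance) vs. kelvin breakpoint format."""
    return [(math.log10(r), t) for r, t in points]

# Two hypothetical Cernox calibration points:
cal = [(30000.0, 1.4), (200.0, 300.0)]
print(to_log_ohm_curve(cal))
```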
|
|