Post by Lake Shore Ryan on Jan 3, 2019 16:11:34 GMT -5
Hi Alexis,
Great to hear you wrote a script to record temperature values! Would you mind sharing the code for future users to start from? I also just finished a very unofficial Python script, adapted from scripts we have for some of our newer instruments, that could be useful for this purpose too. I doubt you'll want to use it given you already have something working, but maybe someone in the future will find it useful.
"""Script that repeatedly requests temperature values (K) from a 240 Series
temperature module that is connected to a computer via a USB cable.

The classes below handle the specifics of detecting, automatically connecting
to, and communicating with a 240 Series module. Please note that these classes
enable only a very small subset of the functions on a 240 Series module,
specifically requesting Kelvin temperature values for all enabled channels.
Note that this configuration cannot support simultaneously connected
240 Series modules; only one 240 Series module can be interrogated at a time."""

import serial
from serial.tools.list_ports import comports
from time import time
from time import sleep


class SerialConnection:
    def __init__(self):
        self.connect()

    def connect(self):
        # Scan ports for the device and create a serial port
        valid_id_combos = [(0x1FB9, 0x0205)]  # 240 module
        serial_params = {'baudrate': 115200,
                         'timeout': 2,
                         'parity': serial.PARITY_NONE}

        # Conduct the port scan
        for port_name in comports():
            if (port_name.vid, port_name.pid) in valid_id_combos:
                self.serial = serial.Serial(port_name.device, **serial_params)
                break
        else:
            raise Exception("""Cannot find device. Look at Device Manager and make
                sure the COM port is there. Also make sure that MeasureLINK for the
                240 Series is not currently connected to the 240 module.""")

    def disconnect(self):
        # Frees the COM port associated with the device
        self.serial.close()

    def command(self, command_string):
        # Issue a command to the device
        # Write the command out with the terminator
        self.serial.write(command_string.encode('ascii') + b'\n')

    def query(self, query_string):
        # Send a query to the device and return the response
        self.command(query_string)
        # Return the response with terminators stripped
        return self.serial.readline().decode('ascii').rstrip('\r\n')


class Series240:
    def __init__(self):
        self.serial_connection = SerialConnection()

    def identify_model(self):
        # Returns 2 if the connected unit is a 240-2P and 8 if the unit is a 240-8P.
        # Looks inside the product part number for the term that identifies it as a
        # 2- or 8-channel unit.
        return int(self.serial_connection.query("*IDN?")[14])

    def enabled_channels(self):
        # Scans through all channels in the unit and returns a list of the
        # channels that are enabled
        channels = self.identify_model()  # determine whether to scan 2 or 8 channels
        enabled = []
        for channel in range(1, channels + 1):  # scan through the channels in the unit
            # Request the input type for the channel and check whether it is enabled
            if self.serial_connection.query("INTYPE? " + str(channel))[10] == "1":
                enabled.append(int(channel))  # add this channel to the list of enabled channels
        return enabled

    def generate_header(self):
        # Returns a comma-separated string that includes the channels that are
        # currently enabled. For example, "Time, Ch1, Ch2, Ch4" would be returned
        # if channels 1, 2, and 4 were enabled.
        query_channels = self.enabled_channels()  # determine which channels are enabled for queries
        header_string = "Time"  # a timestamp will be the first term of the queried temperature values
        # Add a comma-separated channel number for each enabled channel on the device
        for channel in query_channels:
            header_string = header_string + ", Ch" + str(channel)
        return header_string

    def query_temperature_all(self):
        # Return a numeric list of temperature values in kelvin for all enabled channels
        query_channels = self.enabled_channels()  # determine which channels are enabled for queries
        readings = []
        # Request the temperature value for each channel, convert it to a float,
        # and append it to the list of readings
        for channel in query_channels:
            readings.append(float(self.serial_connection.query("KRDG? " + str(channel))))
        return readings

    def disconnect(self):
        # Frees the COM port associated with the device
        self.serial_connection.disconnect()


"""=============== USER SCRIPT BEGINS HERE ==================
The script below sends a request for temperature values at a time interval
approximated by 'INTERVAL' for a time period determined by 'DURATION'. The
actual time interval will be slightly larger than what is set by 'INTERVAL'.

Values will be output to the screen and logged to a file defined by 'FILENAME'.
The text has been formatted specifically for the csv file type, so it is
recommended to leave the file type the same if changing 'FILENAME'.
================================================================"""

# User-modifiable variables
INTERVAL = 0.1  # approximate time step in seconds; the actual time step will be slightly larger
DURATION = 10   # amount of time in seconds to log values
FILENAME = "log240.csv"

log_file = open(FILENAME, 'w')
connection = Series240()

# Create the header for the log file. It includes all currently enabled measurement channels
header = connection.generate_header()
print(header)                  # print the header to the screen
log_file.write(header + "\n")  # save the header to the log file

start = time()           # time in seconds since the epoch at which measurements begin
stop = start + DURATION  # calculate when the measurements should end
current = start          # set the current time to the start time

# Print and log all temperature values until the stop time is reached
while current < stop:
    current = time()  # create a new timestamp
    values = connection.query_temperature_all()  # query all temperature values from the unit
    # Create a string of values that can be displayed on the screen and saved to the log file
    value_string = '%.3f' % (current - start) + "," + ",".join(str(x) for x in values)
    print(value_string)                  # print the string of values
    log_file.write(value_string + "\n")  # save the string of values to the log file
    sleep(INTERVAL)  # wait for the defined amount of time

log_file.close()         # close the log file
connection.disconnect()  # disconnect from the serial interface
Post by Lake Shore Ryan on Oct 10, 2018 16:26:55 GMT -5
No problem. We made changes to our website too, so hopefully this won't be so confusing for the next person viewing that page. Best of luck with your field measurements!
Post by Lake Shore Ryan on Oct 9, 2018 10:48:47 GMT -5
OK, thanks! If you look at that page, it says that the 475 does not use the Hallcal.exe software.
Please have a look at section 5.2 of the manual to see how to program a HMCBL cable using the 475. When using a separate sensor like you have, the only things to enter are the sensor excitation (100 mA for InAs sensors) and a single sensitivity value. Hope this helps. Let me know.
Post by Lake Shore Ryan on Oct 9, 2018 8:27:45 GMT -5
Hi, what are you hoping to do with the HallCal software? This software was used for older gaussmeters and I don't think is needed for the 475. If you are trying to connect a discrete Hall sensor to a HMCBL cable for use with a 475, read section 5.2 of the manual to see how to program it using the 475. If this solves your problem, could you please let me know? Also, what did you read that pointed you towards the HALLCAL.EXE software? Thanks!
Post by Lake Shore Ryan on Apr 16, 2018 16:26:15 GMT -5
I can maybe provide a little additional information here. If you use our high-reliability version of the CX-1080 sensors, there is an option where we calibrate the sensor at a fixed 10 µA as many flight programs use this type of simplified circuitry. That way, self-heating is built into the calibration.
The importance of this will depend on your operating temperature, as you would only begin to see self-heating when the sensor goes much higher than 1 kΩ in resistance, resulting in a voltage greater than 10 mV. For CX-1080s, this happens at around 100 K depending on the particular sensor. Either way, our calibrations extend down to 20 K for these sensors.
As Ogi said, your other option would be to pick a custom excitation that will allow you to stay under 10 mV of signal at your lowest expected temperature and then you wouldn't have to worry about self-heating at all. Then it just becomes a matter of determining whether the associated loss of resolution at higher temperatures is acceptable.
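For anyone scripting this, the signal-limit arithmetic behind picking a custom excitation is just Ohm's law. The sketch below is illustrative only: the function name is mine and the 5 kΩ resistance is a made-up example, not a value from a real Cernox calibration.

```python
def max_excitation_ua(r_max_ohms, v_limit_mv=10.0):
    """Largest excitation current (in microamps) that keeps the sensor
    signal under v_limit_mv at the highest expected resistance, i.e. at
    the lowest expected temperature, per Ohm's law: I = V / R."""
    return (v_limit_mv * 1e-3) / r_max_ohms * 1e6

# Hypothetical example: sensor reaches 5 kOhm at its lowest expected temperature
print(round(max_excitation_ua(5_000), 3))  # 2.0 (microamps)
```

Any fixed excitation at or below that value keeps the sensor under the 10 mV signal limit across the whole temperature range, at the cost of resolution at the warm end as described above.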
If you'd like to know more about our HR Cernox Series, please reach out to us directly, as flight programs have security requirements that we need to adhere to.
Post by Lake Shore Ryan on Apr 9, 2018 13:24:28 GMT -5
Sorry, I missed your last post. I provided a link in my last post for the fiber-optic instrument. If you click on "FOB100" in that post it will take you to Omega's website where you can see pricing.
Post by Lake Shore Ryan on Mar 12, 2018 17:29:33 GMT -5
Well, at that temperature range and accuracy level, you'd have a lot of different options. The problem you might run into with most of these sensors will be inductive pickup in your temperature measurement leads, since most of them rely on a generated voltage value to determine temperature. Our Model 372 combined with a platinum RTD might be a solution due to its balanced current source that can reject environmental noise. However, I worry that even the 372 might not be able to reject the noise from a field large enough to accomplish inductive charging. You might want to look for something based on fiber optics that won't be affected by the large EM field. I found the FOB100 from Omega; this might be a good place to start. It also has a smaller price tag than our Model 372. If you can, let me know what you end up trying. I'm curious to know what solution you end up with.
Post by Lake Shore Ryan on Mar 12, 2018 9:40:01 GMT -5
Hi,
What field strength values would you be expecting at the sensor? And what is your desired temperature accuracy for this measurement? I also just want to check: is the maximum temperature in Celsius?
Post by Lake Shore Ryan on Jan 5, 2018 18:39:43 GMT -5
Hi Jeff,
Nice idea starting out with a bench test. Using the platinum sensor will give you experience wiring up your connector in a 4-lead configuration like Jeff M suggested (see section 3.3.2.5 of the manual) and will give you experience configuring the 218 for that sensor. I'd suggest connecting the sensor to the Input 1 pins (pins 3, 4, 15, 16) of the 25-pin connector (see section 3.3.2.1 of the manual).
Once physically connected, it's going to be easy to set up since the platinum sensor follows a standard curve and you only need to set the Sensor Input Type to 250 Ohm Plat (see section 4.5 of the manual). As soon as you do this for Input 1, your 218 should now be reporting the resistance of the sensor, which would be around 110 Ω at room temperature.
The next step will be to tell the instrument that this is a PT-100 series sensor so that it can report in temperature units (kelvins by default). Section 4.6 of the manual will show you how to do this, you'll want to select Curve Number 6 so that PT-100 is displayed on the screen. Once you've assigned this curve to input 1, your home screen should now be showing a room temperature reading in Kelvin. Now you're ready for the big league.
Cernox sensors are a little more complicated, unfortunately: they don't use a standard curve, so you'll need to load the sensor's calibration curve onto the instrument. Before you do this, though, you should change the input type (section 4.5) to Cernox so that the 218 supplies the sensor with the correct excitation.
Now you'll need to use our Curve Handler software (free) to load your calibration curve onto the instrument. Hopefully you have something on hand to communicate with an RS-232 DB9 serial port. The 218 is one of our older instruments, so it's not very user friendly to get this done. To quote the manual:
Section 4.6, Curve Select: "User curves must be stored in the same location number as the sensor input. Once an appropriate user curve is stored for a sensor input, it can be selected just like standard curves, but it can be used for only one input." The curve format you'll want to use is the .340 file that should have come with your calibrated Cernox sensor. Let me know if you have an issue with any of this. It definitely confused me the first time I tried to use a 218.
Use the process in section 4.6 again to select this new user curve for input 1, then you should be back to seeing a real sensor temperature on your front screen. At this point, I'd suggest reading chapter 2 of the manual for a primer on working with cryogenic environments if you don't have that background already.
Just a quick warning too: Take a look at your Cernox calibration document and make sure that the Cernox sensor doesn't exceed 7500 Ω at the temperatures you're expecting to measure. If you're measuring a low temperature superconductor, you might be seeing temperatures that would cause some Cernox sensors to overload the Model 218 monitor. If this is the case, I'd suggest getting a calibrated DT-670 silicon diode sensor instead. They can measure down to 1.4 K on the 218 and will generally cost less than a Cernox. They just aren't any good in magnetic fields.
Hopefully this is helpful in getting you going with the 218. It's hard to give a thorough rundown of our instruments in a single forum post. Please let me know how you go; I'm writing this all without a 218 running next to me, so hopefully I'm not leading you astray with my suggestions.
Post by Lake Shore Ryan on Dec 28, 2017 0:02:20 GMT -5
Hi,
I am assuming that you are referring to magnetic field offsets in silicon diode sensors?
The error table contains negative numbers because these sensors act as Hall Effect devices when a magnetic field is applied. The sensor voltage increases, resulting in a reduced equivalent temperature since the sensor is a negative temperature coefficient device. At low temperatures and high magnetic fields, the voltage increases so much that it exceeds the maximum standard voltage that the diode would normally produce.
Parallel in the case of the SD package means that the field is in the same plane as the leads. Perpendicular means it is at a right-angle to the leads, running from the mounting base of the sensor, through the face.
For the CU bobbin, this becomes more difficult to explain because the SD package is embedded inside the CU bobbin. Let me know if you really need to know the orientation of the sensor element inside the CU bobbin.
Hope this helps.
Post by Lake Shore Ryan on Dec 5, 2017 10:47:52 GMT -5
Thanks for the question. In this case, accuracy is specified in ohms, so the ±0.02% or reading is referring to the resistance reading of the platinum RTD. At 30 K, a platinum RTD is typically 3.66 ohms, so the total accuracy value should be:
Accuracy(Ω) = ±0.06 ±0.02% of reading = ±0.06 ±0.02% of 3.66 = ±0.06 ±0.0007 = ±0.0607 Ω
Once you have the total accuracy in ohms, you can convert it to an equivalent temperature using the sensitivity value of that sensor. In the case of the platinum RTD at 30 K, this is 0.191 Ω/K:
Accuracy(K) = Accuracy(Ω)/Sensitivity(Ω/K) = ± 0.0607/0.191 = ± 0.318 K
So you can generate your own temperature-based accuracy and resolution numbers from any sensor, provided you know its resistance (or voltage in the case of diodes) and sensitivity values. If you're just looking for indicative values, our sensor response tables will be useful to you.
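If you're doing this for several temperature points, the conversion above is easy to script. Here is a minimal sketch of that arithmetic (the function name is mine; the numbers are the 30 K platinum RTD example from this post):

```python
def accuracy_in_kelvin(fixed_ohms, pct_of_reading, reading_ohms, sensitivity_ohms_per_k):
    """Total accuracy (K) = (fixed error + percent-of-reading error) / sensitivity."""
    total_ohms = fixed_ohms + (pct_of_reading / 100.0) * reading_ohms
    return total_ohms / sensitivity_ohms_per_k

# Platinum RTD at 30 K: +/-0.06 ohm fixed, +/-0.02% of a 3.66 ohm reading,
# sensitivity 0.191 ohm/K
acc_k = accuracy_in_kelvin(fixed_ohms=0.06,
                           pct_of_reading=0.02,
                           reading_ohms=3.66,
                           sensitivity_ohms_per_k=0.191)
print(round(acc_k, 3))  # 0.318 (K)
```

The same function works for any resistive sensor once you look up its resistance and sensitivity at the temperature of interest.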
Hope this helps.
Post by Lake Shore Ryan on Aug 27, 2017 7:32:49 GMT -5
Thanks Bo,
I'll keep this in mind for future controllers. Do you think programmable output voltage rails would be more or less useful than the ability for the temperature controller to drive Peltier elements directly? What sort of current levels are you producing with your op amp circuit? Thanks!
Post by Lake Shore Ryan on May 3, 2017 10:42:17 GMT -5
You're welcome. Please feel free to report back on whether this was implemented successfully.
Post by Lake Shore Ryan on Apr 27, 2017 12:17:33 GMT -5
OK, sorry I didn't realize you wanted it to be software programmable. There is a sneaky trick that might work for you. It involves messing with the 336 calibration gain settings though, so we don't publish information on how to do this in the product manual as some of these commands can ruin the ability of the instrument to make measurements and would require it to be sent back to us for recalibration. The particular command below shouldn't cause any long-term issues with your instrument though, as any changes you make to the calibration settings would be reset when the 336 is power cycled.
The terminal command uses the same structure as the commands shown in Section 6.6 of the 336 product manual and is:

CALG <channel>, 0, <value> (sets calibration gain)
CALZ <channel>, 0, <value> (sets calibration offset)

where:
<channel> = 6 (for Output 3)
<channel> = 7 (for Output 4)
<value> is the gain constant or zero offset applied to the output (ranges from 0 to 1)
So first you will want to query the output to know what your gain constant and offset are at full scale. The examples I show will be for Output 3.
CALG? 6, 0
Your CALG value corresponds to the value required for 20 V of range from -10 to +10 V. Scale this CALG number to create a new full scale range. e.g. if you want +/-6V, then the full scale range will be 12 V and your gain factor should be set to 60% (12/20) of what it currently is.
E.g. CALG 6, 0, 0.6 (assuming my CALG value was 1.0 to begin with)
Now your output range will be around -10 V to +2 V (12 V full range). The next step is to change the offset to shift the max and min outputs to be symmetrical. As a side note, you don't have to make these values symmetrical, but if you don't, be aware that for Outputs 3 and 4, "off" is just the mid-point of full scale, which would normally be 0 V. In this example, "off" or 0% would see the instrument generating a -4 V output.
The offset factor for this scenario can be calculated using: 0.5 - <full range>/40 In this example: 0.5 - 12/40 = 0.2
So you would enter CALZ 6, 0, 0.2
This is where you should check the actual output with a voltmeter. The calculations used to generate these numbers won't result in a perfectly centered bipolar range. You'll need to make slight trial-and-error adjustments to CALZ to get the setting for 0% or "off" to produce 0 V. For the unit I have here, I had to shift my CALZ value from 0.2 to 0.2084 to get a 0.00 V reading.
The good news is that once you have a set of CALG and CALZ values that you're happy with for a given range, you can program these into your code to modify the range whenever you want. Just remember that these settings are not permanent and the instrument will revert back to its original calibration values (and +/-10 V output) when you turn the 336 off.
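To make this repeatable in code, the scaling arithmetic above can be wrapped in a small helper. This is just a sketch of the calculation from this post (the function name is mine, and it assumes the default CALG value is 1.0); the final CALZ value still needs the trial-and-error voltmeter adjustment described above, and none of this is an official Lake Shore API.

```python
def bipolar_range_settings(full_range_v, default_calg=1.0):
    """Approximate (CALG, CALZ) values for a symmetric +/- range on
    Output 3 or 4 of a Model 336, per the scaling described in this post.
    full_range_v is the desired total span in volts (e.g. 12 for +/-6 V)."""
    calg = default_calg * (full_range_v / 20.0)  # scale down from the default 20 V span
    calz = 0.5 - full_range_v / 40.0             # re-center so 0% output sits near 0 V
    return calg, calz

calg, calz = bipolar_range_settings(12)  # target +/-6 V
print("CALG 6, 0, %.4f" % calg)  # CALG 6, 0, 0.6000
print("CALZ 6, 0, %.4f" % calz)  # CALZ 6, 0, 0.2000
```

Send the two printed commands over the 336's remote interface, then trim CALZ by hand until 0% actually reads 0 V on your meter.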
Hope this is a better solution for you. Please let me know how this goes, or if you have any other questions.
Post by Lake Shore Ryan on Apr 26, 2017 16:15:30 GMT -5
Hi Bo,
Sounds like a tricky balancing act you have there. Is your current supply not capable of applying a custom amount of gain? I'm guessing not. Do you mind sharing the model of the current supply you're using?
Unfortunately, adding custom output ranges to our existing instruments would take up too much development time and increase the complexity of our instrument calibrations.
A quick and easy fix for this problem might just be a simple voltage divider (see below). As long as the voltage input on your current supply is a high-impedance input, the formula I've included in the image should hold true.
The 1 kΩ minimum resistance is what is required for the analog output to produce its 10 V output. Also, this is a low-power output, so standard 1/4 W resistors will be fine for this application.
If you'd like to vary your maximum output voltage, you could use a variable resistor (potentiometer) for R1. In this scenario, I'd suggest: R1: 10 kΩ potentiometer R2: 1.2 kΩ 10% tolerance (or better) resistor.
This would give you an adjustable VOUT from around 1 V (when the potentiometer is set to 10 kΩ) to 10 V (when the potentiometer is set to 0 Ω).
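For reference, the divider relationship assumed here (a high-impedance input on the current supply, as noted above) is the standard VOUT = VIN × R2 / (R1 + R2). A quick sketch with the component values suggested in this post:

```python
def divider_vout(vin, r1_ohms, r2_ohms):
    """Standard voltage divider: VOUT = VIN * R2 / (R1 + R2).
    Valid when the load on VOUT is high-impedance."""
    return vin * r2_ohms / (r1_ohms + r2_ohms)

# 10 V analog output, R1 = 10 kOhm potentiometer at full travel, R2 = 1.2 kOhm:
print(round(divider_vout(10, 10_000, 1_200), 2))  # 1.07 (V)
# Potentiometer dialed down to 0 Ohm passes the full output:
print(divider_vout(10, 0, 1_200))  # 10.0
```

This also shows why the minimum total resistance matters: with the pot at 0 Ω the analog output only sees R2, so R2 must stay at or above the 1 kΩ minimum.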
Hope this helps. Let me know.
Regards,