I basically use an Excel sheet. Make a scatter plot with the "true" values on one axis and the "measured (slightly wrong)" values on the other, then do a best fit to y=mx+b and, in the future, manually correct readings with that equation using my phone calculator.
Some classically trained engineers will tell you the "true" value should always go on the x-axis, since it's usually considered the more "independent" variable... but this is highly debatable, and you can skip some simple algebra later if you put the measured value on the x-axis instead. Then look at the shape of the scatter plot. Ideally it will be linear, so you ask Excel for a linear curve fit (y=m*x+b). Write this on the scale, and whenever you take a measurement on it, whip out your phone and compute "measured_value * m + b". That's your true value. If it's not a linear fit (quadratic, log, etc.)... that's interesting, and often it means something is "wrong", but also "it is what it is". Classically trained engineers will say you have to use a linear fit if that's what the theory says is appropriate, but for one-off home device calibration... do whatever works for you, as long as you don't overfit with some stupid 4-, 5-, or 6-term equation. Any reasonably simple equation with 2-3 terms is fine IMHO.
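If you'd rather not keep the fit living in Excel, here's a minimal sketch of the same idea in Python (assuming numpy is available); the reference values and readings below are made-up numbers purely for illustration:

    import numpy as np

    # "True" reference values (e.g. weights verified on a calibrated scale) -- made-up numbers
    true_vals = np.array([10.1, 20.3, 39.8, 80.2, 160.5])
    # What the uncalibrated scale reported for each of them
    measured = np.array([9.6, 19.5, 38.9, 78.8, 158.1])

    # Fit true = m*measured + b, with the measured value on the x-axis so no
    # algebra is needed when applying the correction later
    m, b = np.polyfit(measured, true_vals, 1)
    print(f"correction: true ~= {m:.4f} * measured + {b:.4f}")

    # Later, correct a new reading from the scale
    reading = 55.0
    print("corrected value:", m * reading + b)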
I use a set of heavy objects whose mass I know fairly precisely. They're not exactly 10.000 lbs, 20.000 lbs, etc. ... they're just "around 10 lbs, around 20 lbs", and I've weighed them on a good, actually-calibrated scale (at work, some commercial business with calibrated scales you can access, whatever) and written the weights in Sharpie on pieces of tape stuck to the objects. Ideally you'd go for roughly 10% increments: if the scale can weigh 400 lbs, that would be every 40 lbs or so. But it really doesn't matter as long as you have enough good points around the range you actually intend to measure, plus a few outside that target range at semi-regular intervals.
For my 0.1mg-resolution mass balance I have some actual calibration weights, but they're a relatively affordable OIML "M1" class, and did not come with expensive calibration certificates. The OIML tolerance ratings go E1, E2, F1, F2, M1, M2, M3 (from best to worst). For a 100g test weight, M1 precision gets you +/- 0.005g, guaranteed, for $50 ($135 if you want a calibration certificate). E1 gets you +/- 0.00005g at 100g test weight, for $500 ($1200 with cal cert). For smaller calibration weights like 10mg you'll generally want to go a step up from M1 (+/- 0.25mg) to F2 (+/- 0.08mg) for about $27.
For temperature it's a bit trickier, because the only "true" temperatures you can easily create at home are -6°F/-21°C and 228°F/109°C: the freezing and boiling points of fully saturated salt water. If those temperatures are useful to you, make the solution by pouring shitloads of salt into water and stirring (and heating) until no more salt will dissolve and you just have a pile of salt sitting at the bottom of the container; boil it for the hot point, or pack it with crushed ice and stir for the cold one. You can try to go for "0°C/100°C" using distilled water and it would probably be close enough, but you can't know it exactly unless you use super-pure de-ionized water and extremely absurd lab technique (usually involving washing your glassware and tools with de-ionized water over and over for several days straight to get rid of trace contaminants).
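If you do manage to produce two reference temperatures, the linear correction falls straight out of those two points, no Excel needed. A minimal sketch, where the raw readings are hypothetical and the reference temperatures are the saturated-brine points above (in °C):

    # (what the thermometer read, what the temperature actually was), in deg C
    ref = [(-19.0, -21.0), (105.5, 109.0)]  # raw readings here are hypothetical

    (x1, y1), (x2, y2) = ref
    m = (y2 - y1) / (x2 - x1)  # slope of the correction
    b = y1 - m * x1            # offset

    def correct(reading_c):
        """Apply the two-point linear correction to a raw reading (deg C)."""
        return m * reading_c + b

    print(correct(20.0))  # corrected room-temperature reading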
So instead, to get "true" temperature in the range I care about, I use some thermocouples attached to a high-quality multimeter or oscilloscope. I calibrate those thermocouples using the method above, then average their readings to get the oven temperature. This works, and extrapolates well enough outside the calibration range, because the error of a thermocouple is basically guaranteed to be very linear.
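To make that concrete, a small sketch of applying each thermocouple's own linear correction and then averaging; the coefficients and readings are made up:

    # Per-thermocouple correction coefficients (m, b) from earlier linear fits -- made-up values
    thermocouples = {"tc1": (1.02, -0.8), "tc2": (0.99, 0.3), "tc3": (1.01, -0.2)}

    # Raw readings for the same oven setpoint (deg C) -- made-up values
    raw = {"tc1": 176.4, "tc2": 179.1, "tc3": 177.6}

    corrected = [m * raw[name] + b for name, (m, b) in thermocouples.items()]
    oven_temp = sum(corrected) / len(corrected)
    print(f"estimated oven temperature: {oven_temp:.1f} C")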
In this link[0] topics 1-6 ("weeks") get into the fine details of all this and provide some worksheets/excel sheets already made up for this type of thing. If you're really getting into the weeds with this, understanding propagation of error[1] really helps but is super unnecessary for 99% of people unless they're doing actual engineering.
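For the curious, the core idea of [1] boils down to this: independent errors add in quadrature, not linearly. A tiny illustration, assuming you stack a few calibration weights whose individual tolerances you know (values are just for show):

    import math

    # Tolerances of three stacked weights, in grams -- illustrative values
    tolerances = [0.005, 0.005, 0.0025]

    # For independent errors, the combined uncertainty is the root-sum-of-squares
    combined = math.sqrt(sum(t**2 for t in tolerances))
    print(f"combined tolerance: +/- {combined:.4f} g")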
0: https://pages.mtu.edu/~fmorriso/cm3215/laboratory_exercise_s...
1: https://pages.mtu.edu/~fmorriso/Pintar_Error_Analysis_or_UO_...