Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?
I’m curious as to whether or not there is a real difference between the money datatype and something like decimal(19,4) (which is what money uses internally, I believe).

I’m aware that money is specific to SQL Server. What I want to know is if there is a compelling reason to choose one over the other; most SQL Server samples (e.g. the AdventureWorks database) use money and not decimal for things like price information.

Should I just continue to use the money datatype, or is there a benefit to using decimal instead? Money is fewer characters to type, but that’s not a valid reason 🙂
11 Solutions for “Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?”
Never ever should you use money; it is not precise, and it is pure garbage. Always use decimal/numeric.
Run this to see what I mean:
DECLARE @mon1 MONEY, @mon2 MONEY, @mon3 MONEY, @mon4 MONEY,
        @num1 DECIMAL(19,4), @num2 DECIMAL(19,4), @num3 DECIMAL(19,4), @num4 DECIMAL(19,4)

SELECT @mon1 = 100, @mon2 = 339, @mon3 = 10000,
       @num1 = 100, @num2 = 339, @num3 = 10000

SET @mon4 = @mon1/@mon2*@mon3
SET @num4 = @num1/@num2*@num3

SELECT @mon4 AS moneyresult, @num4 AS numericresult
Output: 2949.0000 and 2949.8525. The money division keeps only four decimal places in the intermediate result, so 100/339 truncates to 0.2949 before the multiplication, giving 2949 instead of 2949.8525.
To some of the people who said that you don’t divide money by money: here is one of my queries to calculate correlations, and changing it to money gives wrong results.
select t1.index_id, t2.index_id,
       (avg(t1.monret*t2.monret) - (avg(t1.monret) * avg(t2.monret)))
       / ((sqrt(avg(square(t1.monret)) - square(avg(t1.monret))))
        * (sqrt(avg(square(t2.monret)) - square(avg(t2.monret))))),
       current_timestamp, @MaxDate
from Table1 t1
join Table1 t2 on t1.Date = traDate
group by t1.index_id, t2.index_id
SQLMenace said money is inexact. But you don’t multiply/divide money by money! How much is 3 dollars times 50 cents? 150 dollarcents? You multiply/divide money by scalars, which should be decimal.
DECLARE @mon1 MONEY, @mon4 MONEY,
        @num1 DECIMAL(19,4), @num2 DECIMAL(19,4), @num3 DECIMAL(19,4), @num4 DECIMAL(19,4)

SELECT @mon1 = 100, @num1 = 100, @num2 = 339, @num3 = 10000

SET @mon4 = @mon1/@num2*@num3
SET @num4 = @num1/@num2*@num3

SELECT @mon4 AS moneyresult, @num4 AS numericresult
This yields the correct result:

moneyresult           numericresult
--------------------- ---------------------------------------
2949.8525             2949.8525
money is good as long as you don’t need more than 4 decimal digits, and you make sure your scalars – which do not represent money – are decimals.
I realise that WayneM has stated he knows that money is specific to SQL Server. However, he is asking whether there are any reasons to use money over decimal or vice versa, and I think one obvious reason still ought to be stated: using decimal means it’s one less thing to worry about if you ever have to change your DBMS, which can happen.
Make your systems as flexible as possible!
Everything is dangerous if you don’t know what you are doing
Even high-precision decimal types can’t save the day:
declare @num1 numeric(38,22)
declare @num2 numeric(38,22)
set @num1 = .0000006
set @num2 = 1.0
select @num1 * @num2 * 1000000
Output: 1.000000 (should be 0.6000000)
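The cause is the intermediate type rather than decimal itself: multiplying NUMERIC(38,22) by NUMERIC(38,22) would need precision 77 and scale 44, so SQL Server caps the precision at 38 and cuts the intermediate scale back to 6, which rounds 0.0000006 up to 0.000001. A small sketch of how to see this (SQL_VARIANT_PROPERTY is used here only to inspect the intermediate result):

declare @num1 numeric(38,22)
declare @num2 numeric(38,22)
set @num1 = .0000006
set @num2 = 1.0

-- the product is capped at precision 38, scale 6, so 0.0000006 becomes 0.000001
select sql_variant_property(@num1 * @num2, 'Precision') as intermediate_precision,  -- 38
       sql_variant_property(@num1 * @num2, 'Scale')     as intermediate_scale,      -- 6
       @num1 * @num2 * 1000000                          as final_result             -- 1.000000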
money types are integers
The text representations of smallmoney and decimal(10,4) may look alike, but that doesn’t make them interchangeable. Do you cringe when you see dates stored as varchar(10)? This is the same thing.

Behind the scenes, money/smallmoney are just a bigint/int. The decimal point in the text representation of money is visual fluff, just like the dashes in a yyyy-mm-dd date. SQL doesn’t actually store those internally.
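One way to see this for yourself (a small illustration; the value 1.2345 is arbitrary) is to look at the raw bytes of a money value, which are just the amount in ten-thousandths stored as an 8-byte integer:

declare @m money
set @m = 1.2345

-- the binary image is the integer 12345 (1.2345 * 10000) in 8 bytes; no decimal point is stored
select convert(binary(8), @m) as raw_bytes   -- 0x0000000000003039 (0x3039 = 12345)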
As for decimal vs. money, pick whatever is appropriate for your needs. The money types exist because storing accounting values as integer multiples of 1/10000th of a unit is very common. Also, if you are dealing with actual money and calculations beyond simple addition and subtraction, you shouldn’t be doing that at the database level! Do it at the application level with a library that supports Banker’s Rounding (IEEE 754).
Well, I like MONEY! It’s a byte cheaper than DECIMAL, and the computations perform quicker because (under the covers) addition and subtraction operations are essentially integer operations. @SQLMenace’s example, which is a great warning for the unaware, could equally be applied to INTegers, where the result would be zero. But that’s no reason not to use integers, where appropriate.
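For comparison, the same arithmetic with plain INTs (a one-line illustration):

SELECT 100 / 339 * 10000 AS intresult   -- 0: integer division truncates 100/339 to 0 before the multiplication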
So, it’s perfectly ‘safe’ and appropriate to use MONEY when what you are dealing with is MONEY, and to use it according to the mathematical rules that it follows (the same rules an INTeger follows).

Would it have been better if SQL Server promoted division and multiplication of MONEY into DECIMALs (or FLOATs?) Possibly, but they didn’t choose to do this; nor did they choose to promote INTegers to FLOATs when dividing them.
MONEY has no precision issue; that DECIMALs get to have a larger intermediate type used during calculations is just a ‘feature’ of using that type (and I’m not actually sure how far that ‘feature’ extends).

To answer the specific question: a “compelling reason”? Well, if you want absolute maximum performance in a SUM(x) where x could be either DECIMAL or MONEY, then MONEY will have an edge.
Also, don’t forget its smaller cousin, SMALLMONEY: just 4 bytes, but it maxes out at 214,748.3647, which is pretty small for money, and so it is not often a good fit.
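A quick way to see that ceiling (the two values are just at and just over the documented SMALLMONEY maximum):

DECLARE @sm SMALLMONEY
SET @sm = 214748.3647    -- fine: this is the maximum SMALLMONEY value
SET @sm = 214748.3648    -- fails with an arithmetic overflow error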
To prove the point about larger intermediate types: if you assign the intermediate explicitly to a variable, DECIMAL suffers the same problem:
declare @a decimal(19,4)
declare @b decimal(19,4)
declare @c decimal(19,4)
declare @d decimal(19,4)
select @a = 100, @b = 339, @c = 10000
set @d = @a/@b
set @d = @d*@c
select @d
2950.0000 (okay, so at least DECIMAL rounded, rather than truncating as MONEY does, the same way an integer would). Assigning @a/@b to the DECIMAL(19,4) variable rounds the intermediate 0.294985… to 0.2950 before the multiplication, which is where the error comes from.
As a counterpoint to the general thrust of the other answers, see The Many Benefits of Money…Data Type! in SQLCAT’s Guide to Relational Engine. Specifically, I would point out the following:
Working on customer implementations, we found some interesting performance numbers concerning the money data type. For example, when Analysis Services was set to the currency data type (from double) to match the SQL Server money data type, there was a 13% improvement in processing speed (rows/sec). To get faster performance within SQL Server Integration Services (SSIS) to load 1.18 TB in under thirty minutes, as noted in SSIS 2008 – world record ETL performance, it was observed that changing the four decimal(9,2) columns with a size of 5 bytes in the TPC-H LINEITEM table to money (8 bytes) improved bulk inserting speed by 20% … The reason for the performance improvement is SQL Server’s Tabular Data Stream (TDS) protocol, which has the key design principle of transferring data in compact binary form, as close as possible to the internal storage format of SQL Server. Empirically, this was observed during the SSIS 2008 – world record ETL performance test using Kernrate; the protocol overhead dropped significantly when the data type was switched to money from decimal. This makes the transfer of data as efficient as possible. A complex data type needs more parsing and CPU cycles to handle than a fixed-width type does.
So the answer to the question is “it depends”. You need to be more careful with certain arithmetical operations to preserve precision, but you may find that performance considerations make this worthwhile.
We’ve just come across a very similar issue, and I’m now very much a +1 for never using money except in top-level presentation. We have multiple tables (effectively a sales voucher and a sales invoice), each of which contains one or more money fields for historical reasons, and we need to perform a pro-rata calculation to work out how much of the total invoice tax is relevant to each line on the sales voucher. Our calculation is
vat proportion = total invoice vat x (voucher line value / total invoice value)
This results in a real-world money/money calculation which causes scale errors on the division part, which then multiply up into an incorrect VAT proportion. When these values are subsequently added, we end up with a sum of the VAT proportions which does not add up to the total invoice value. Had either of the values in the brackets been a decimal (I’m about to cast one of them as such), the VAT proportion would be correct.

When the brackets weren’t there originally this used to work; I guess that, because of the larger values involved, it was effectively simulating a higher scale. We added the brackets because it was doing the multiplication first, which was in some rare cases blowing the precision available for the calculation, but this has now caused this much more common error.
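A minimal sketch of the failure mode described above; the variable names and figures are illustrative, not our actual invoice data:

DECLARE @TotalInvoiceVat MONEY, @VoucherLineValue MONEY, @TotalInvoiceValue MONEY
SELECT @TotalInvoiceVat = 100.00, @VoucherLineValue = 1.00, @TotalInvoiceValue = 3000.00

-- money / money keeps only four decimal places in the ratio (0.0003 instead of 0.000333...),
-- and that error is multiplied back up in the pro-rata figure
SELECT @TotalInvoiceVat * (@VoucherLineValue / @TotalInvoiceValue)                        AS MoneyProRata,    -- 0.03
       @TotalInvoiceVat * (CAST(@VoucherLineValue AS DECIMAL(19,4)) / @TotalInvoiceValue) AS DecimalProRata   -- ~0.0333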
I want to give a different view of MONEY vs. NUMERIC, largely based on my own expertise and experience. My point of view here is pro-MONEY, because I have worked with it for a considerably long time and never really used NUMERIC much.
- Native data type: it uses a native data type (integer), the same as a CPU register (32 or 64 bit), so calculations don’t need unnecessary overhead, which makes it smaller and faster. MONEY needs 8 bytes and NUMERIC(19,4) needs 9 bytes (12.5% bigger). MONEY is faster as long as it is used for what it is meant to be (as money). How fast? My simple SUM test on 1 million rows showed MONEY at 275 ms and NUMERIC at 517 ms; that’s almost twice as fast. Why a SUM test? See the next pro point, and the sketch at the end of this answer.
- Best for money: MONEY is best for storing money and doing operations on it, for example in accounting. A single report can run millions of additions (SUM) and a few multiplications after the SUM is done; for a very big accounting application, being almost twice as fast is extremely significant.
- Low precision of money: money in real life doesn’t need to be very precise. I mean, many people may care about 1 cent USD, but how about 0.01 cent USD? In fact, in my country banks no longer care about cents (the digits after the decimal separator); I don’t know about US banks or other countries.
- Limited precision: MONEY only has 4 digits of precision (after the decimal separator), so it has to be converted before operations such as division. But then again, money doesn’t need to be that precise; it is meant to be used as money, not just as a number.
But… and it is a big but: if your application involves real money but doesn’t use it in lots of SUM operations, as in accounting, and instead uses lots of division and multiplication, then you should not use MONEY.
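A minimal sketch of the kind of SUM test mentioned above, assuming a scratch temp table and synthetic data (this is not the original test harness):

-- build a scratch table with the same value stored as MONEY and as DECIMAL(19,4)
IF OBJECT_ID('tempdb..#amounts') IS NOT NULL DROP TABLE #amounts
CREATE TABLE #amounts (amount_money MONEY, amount_dec DECIMAL(19,4))

INSERT INTO #amounts (amount_money, amount_dec)
SELECT 123.4567, 123.4567
FROM sys.all_objects a CROSS JOIN sys.all_objects b   -- several million rows, depending on the instance

SET STATISTICS TIME ON
SELECT SUM(amount_money) FROM #amounts   -- integer-style addition under the covers
SELECT SUM(amount_dec)   FROM #amounts   -- decimal addition
SET STATISTICS TIME OFF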
I found a reason for using decimal over money, on the subject of accuracy.
DECLARE @dOne DECIMAL(19,4), @dThree DECIMAL(19,4),
        @mOne MONEY, @mThree MONEY,
        @fOne FLOAT, @fThree FLOAT

SELECT @dOne = 1, @dThree = 3,
       @mOne = 1, @mThree = 3,
       @fOne = 1, @fThree = 3

SELECT (@dOne/@dThree)*@dThree AS DecimalResult,
       (@mOne/@mThree)*@mThree AS MoneyResult,
       (@fOne/@fThree)*@fThree AS FloatResult
Just test it and make your decision.
I just saw this entry…
which basically says that money has a precision issue….
declare @m money
declare @d decimal(9,2)

set @m = 19.34
set @d = 19.34

select (@m/1000)*1000
select (@d/1000)*1000
For the money type, you will get 19.30 instead of 19.34. I am not sure whether there is an application scenario that divides money into 1000 parts for calculation, but this example does expose some limitations.
You shouldn’t use money when you need to do multiplications/divisions on the value. Money is stored in the same way an integer is stored, whereas decimal is stored as a decimal point and decimal digits. This means that money will drop accuracy in most such cases, while decimal will only do so when converted back to its original scale. Money is fixed point, so its scale doesn’t change during calculations. However, because it is a fixed point in a decimal string (as opposed to a fixed position in a base-2 string), values up to a scale of 4 are represented exactly. So for addition and subtraction, money is fine.

A decimal is represented in base 10 internally, and thus the position of the decimal point is also based on the base-10 number. That makes its fractional part represent its value exactly, just like with money. The difference is that intermediate values of decimal can maintain precision up to 38 digits.

With a floating-point number, the value is stored in binary as if it were an integer, and the decimal (or rather, binary) point’s position is relative to the bits representing the number. Because it is a binary point, base-10 numbers lose precision right after the decimal point: 1/5th, or 0.2, cannot be represented precisely this way. Neither money nor decimal suffers from this limitation.
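A quick way to see the difference (the literals 0.1, 0.2 and 0.3 are just example values):

SELECT CASE WHEN CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT) = CAST(0.3 AS FLOAT)
            THEN 'equal' ELSE 'not equal' END AS float_check,     -- 'not equal' (binary rounding)
       CASE WHEN CAST(0.1 AS MONEY) + CAST(0.2 AS MONEY) = CAST(0.3 AS MONEY)
            THEN 'equal' ELSE 'not equal' END AS money_check,     -- 'equal'
       CASE WHEN CAST(0.1 AS DECIMAL(9,4)) + CAST(0.2 AS DECIMAL(9,4)) = CAST(0.3 AS DECIMAL(9,4))
            THEN 'equal' ELSE 'not equal' END AS decimal_check    -- 'equal'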
It is easy enough to convert money to decimal, perform the calculations, and then store the resulting value back into a money field or variable.
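For example, a small sketch of that pattern, reusing the 19.34 / 1000 case shown earlier (variable names are illustrative):

DECLARE @m MONEY, @result MONEY
SET @m = 19.34

-- do the intermediate arithmetic in decimal, then store the result back into money
SET @result = (CAST(@m AS DECIMAL(19,6)) / 1000) * 1000
SELECT @result   -- 19.34, rather than the 19.30 the pure-money version gives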
From my POV, I want stuff that happens to numbers to just happen without having to give too much thought to them. If all calculations are going to get converted to decimal, then to me I’d just want to use decimal. I’d save the money field for display purposes.
Size-wise, I don’t see enough of a difference to change my mind. Money takes 4 or 8 bytes, whereas decimal can be 5, 9, 13, or 17. The 9 bytes can cover the entire range that the 8 bytes of money can. Index-wise, comparing and searching should be comparable.
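A quick check of those sizes (DATALENGTH reports the storage size of each variable):

DECLARE @m MONEY, @sm SMALLMONEY, @d DECIMAL(19,4)
SELECT @m = 1, @sm = 1, @d = 1

SELECT DATALENGTH(@m)  AS money_bytes,        -- 8
       DATALENGTH(@sm) AS smallmoney_bytes,   -- 4
       DATALENGTH(@d)  AS decimal_19_4_bytes  -- 9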