c# - How does decimal work?
I looked at decimal in C# but wasn't 100% sure how it works. Is it lossy? In C#, writing 1.0000000000001f + 1.0000000000001f results in 2 when using float (double gets 2.0000000000002, which is correct). Is it possible to add two things with decimal and not get the correct answer?

How many decimal places can I use? I see that MaxValue is 79228162514264337593543950335, but if I subtract 1, how many decimal places can I use?

Are there any quirks I should know of? In C# it's 128 bits; in other languages, how many bits is it, and does it work the same way as C#'s decimal does (when adding, dividing, and multiplying)?
What you're showing isn't decimal - it's float. They're different types. The f suffix means float, aka System.Single. The m suffix means decimal, aka System.Decimal. It's not clear from the question whether you thought you were using decimal, or whether you were using float to demonstrate your fears.

If you use 1.0000000000001m + 1.0000000000001m you'll get the right value. Note that the double version wasn't able to express either of those individual values exactly, by the way.
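To make the suffix difference concrete, here is a small sketch (my illustration, not part of the original answer) comparing the three types on the exact sum from the question:

```csharp
using System;

class SuffixDemo
{
    static void Main()
    {
        // float (System.Single) has only ~7 significant decimal digits,
        // so 1.0000000000001f rounds to exactly 1.0f before the addition.
        float f = 1.0000000000001f + 1.0000000000001f;
        Console.WriteLine(f); // 2

        // double has ~15-16 significant digits, enough for this value
        // (though it still holds only a close binary approximation).
        double d = 1.0000000000001 + 1.0000000000001;
        Console.WriteLine(d); // 2.0000000000002

        // decimal (System.Decimal) stores base-10 digits, so the sum is exact.
        decimal m = 1.0000000000001m + 1.0000000000001m;
        Console.WriteLine(m); // 2.0000000000002
    }
}
```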
I have articles on both kinds of floating point in .NET, and you should read them thoroughly, along with other resources:
- Binary floating point (float/double)
- Decimal floating point (decimal)
All floating point types have their limits of course, but in particular you should not expect binary floating point to accurately represent decimal values such as 0.1. decimal still can't represent anything that isn't representable in 28/29 decimal digits though - if you divide 1 by 3, you won't get an exact answer of course.
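For example (my own sketch, not from the answer), dividing 1 by 3 in decimal rounds the quotient to 28 significant digits, and the rounding error is easy to observe by multiplying back:

```csharp
using System;

class ThirdDemo
{
    static void Main()
    {
        // The quotient is rounded to 28 significant decimal digits.
        decimal third = 1m / 3m;
        Console.WriteLine(third); // 0.3333333333333333333333333333

        // Multiplying back does not recover exactly 1.
        Console.WriteLine(third * 3m);       // 0.9999999999999999999999999999
        Console.WriteLine(third * 3m == 1m); // False
    }
}
```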
You should also note that the range of decimal is considerably smaller than that of double. So while it can have 28-29 decimal digits of precision, it can't represent truly huge numbers (e.g. 10^200) or minuscule numbers (e.g. 10^-200).
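A quick sketch of that range difference (my own illustration): double happily holds 1e200, but converting it to decimal overflows, since decimal.MaxValue is only about 7.9 × 10^28. Values far below decimal's smallest nonzero magnitude convert to zero instead.

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335

        double huge = 1e200; // fine for double
        try
        {
            decimal d = (decimal)huge; // far outside decimal's range
        }
        catch (OverflowException)
        {
            Console.WriteLine("1e200 does not fit in a decimal");
        }

        double tiny = 1e-200; // also fine for double
        // Too small for decimal: the conversion underflows to zero.
        Console.WriteLine((decimal)tiny == 0m); // True
    }
}
```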