
Measurement and Review of Schiit BiFrost Multibit DAC

OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
What I wonder about regarding the multibit and their claims of "time and frequency optimized closed form digital filter" is the time-domain optimization. This seems somewhat akin to what Chord claims from their approach of using an FPGA and putting in tens of thousands of taps. How can this be measured? I guess not using FFT, since that only shows frequency reproduction, noise, and jitter/distortion. Somehow you'd have to measure the timing of an impulse response, and how that compares across different DACs.
Before we can devise a test, we need to determine what they have actually done beyond the techno-marketing terms they use. I once spent a few hours looking and all I found were some vague words and nothing else. If someone has the actual technical explanation of what they say they have done, I'd love to see it and we can go from there.

On Chord, we can actually measure what they have done, using much more sensitive instrumentation to see the response of the filter, since they mostly talk about the number of taps in FIR filters. That level of detail simply does not exist with respect to Schiit's filtering.
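For what it's worth, here is roughly how the time-domain side could be quantified once someone captures an impulse response (play a single full-scale sample through the DAC and record the analog output with a wideband ADC). Everything below is a sketch under those assumptions; the function and thresholds are mine, not anything Schiit or Chord publish.

```python
import numpy as np

def ringing_metrics(impulse_response, fs):
    """Split a captured DAC impulse response into pre- and post-ringing energy."""
    h = np.asarray(impulse_response, dtype=float)
    peak = int(np.argmax(np.abs(h)))       # main lobe of the reconstruction filter
    energy = h ** 2
    total = energy.sum()
    pre = energy[:peak].sum() / total      # ringing before the main peak
    post = energy[peak + 1:].sum() / total # ringing after the main peak
    # Crude settling figure: time until the envelope stays below -60 dB re: peak.
    env = np.abs(h) / np.abs(h[peak])
    late = np.flatnonzero(env[peak:] > 1e-3)
    settle_ms = 1e3 * (late[-1] if late.size else 0) / fs
    return pre, post, settle_ms
```

A long linear-phase filter (Chord style) should show roughly symmetric pre/post ringing, while a minimum-phase or "time-optimized" filter should show almost no pre-ringing, so numbers like these would at least let us compare apples to apples.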
 

Dana reed

Active Member
Joined
Mar 13, 2018
Messages
244
Likes
245
Before we can devise a test, we need to determine what they have actually done beyond the techno-marketing terms they use. I once spent a few hours looking and all I found were some vague words and nothing else. If someone has the actual technical explanation of what they say they have done, I'd love to see it and we can go from there.

On Chord, we can actually measure what they have done, using much more sensitive instrumentation to see the response of the filter, since they mostly talk about the number of taps in FIR filters. That level of detail simply does not exist with respect to Schiit's filtering.
It seems that they're not claiming high bit-depth resolution (which yours and others' measurements obviously show), but rather the specific timing of signals and phase information, within the constraints of 12-16 bits or whatever of resolution. Whether this is important probably depends on whether it's a recording of a live performance with various microphone placements, or standard multitracked, mixed, and mastered audio.
The answer to about the 5th question down is as much explanation as I've seen regarding the DSP filter and its time-domain optimization. Perhaps it is measurable by generating multiple out-of-phase signals and trying to discern them in the time domain?
https://www.audiostream.com/content/qa-mike-moffat-schiit-audio
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,425
Likes
3,979
Location
SoCal
Perhaps plotting phase response as a function of signal frequency could be an interesting metric?
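That would be easy to pull out of the same kind of captured impulse response; a minimal numpy sketch (the `h` array is whatever was measured, nothing Schiit-specific):

```python
import numpy as np

def phase_and_group_delay(h, fs, nfft=1 << 16):
    """Phase response and group delay of a measured (or designed) filter h."""
    H = np.fft.rfft(h, nfft)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    phase = np.unwrap(np.angle(H))                        # radians, unwrapped
    group_delay = -np.gradient(phase, 2 * np.pi * freqs)  # seconds
    return freqs, phase, group_delay
```

A plain linear-phase FIR gives a flat group-delay curve; any "time optimization" ought to show up as a deviation from that, most likely near the top of the band.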
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
Thanks. Alas, it is still full of nonsense. Take this part:

[attached screenshot of the quoted Q&A passage]


So first we are told something is "fairly" bit perfect. Fairly? If something is bit-perfect, then it can't be an approximation. You can't bastardize the term that way.

It then gets worse in the next paragraph when he talks about "interpolated samples." How the heck can interpolated samples be "bit exact," with what goes in being what goes out???

If this is a FIR filter, then each new sample is generated from the N input samples around it. There is no notion of bit-perfect anything. You are filtering, and the point of filtering is to modify the samples to get rid of their high-frequency components. How do you land on bit-perfect?

On the math, that is a theoretical solution to the filter. Once implemented in a DSP, approximations are almost always made, and then the accuracy is no longer there. This is what I meant by "resolution."

Can anyone else make sense out of what he is saying?
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,425
Likes
3,979
Location
SoCal
I think what he is saying is that if 10 samples come in and 40 samples come out, those 10 samples will be among the 40 samples, which is not necessarily the case with FIR filters.
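That reading matches what DSP texts call an L-th band (Nyquist) interpolation filter: zero-stuff by 4, then filter with a FIR whose coefficients are zero at every 4th position except the centre tap, and the original samples drop straight through. A quick sketch of the idea (the windowed-sinc here is my own toy example, not Schiit's filter):

```python
import numpy as np

L = 4                                     # 10 samples in -> 40 samples out
n = np.arange(-16, 17)                    # 33-tap toy filter (not Schiit's)
h = np.sinc(n / L) * np.hamming(len(n))   # zero at every multiple of L except n = 0

x = np.random.randn(10)                   # the 10 incoming samples
xu = np.zeros(10 * L)
xu[::L] = x                               # zero-stuff by L
y = np.convolve(xu, h)                    # FIR interpolation to 40 samples (plus filter tail)

delay = 16                                # centre-tap group delay in output samples
print(np.allclose(y[delay::L][:10], x))   # True: the originals are among the outputs
```

A generic lowpass FIR, whose coefficients are not constrained that way, would instead replace the original samples with weighted averages, which is presumably the disconnect in the marketing wording.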
 

FrivolsListener

Active Member
Joined
Feb 14, 2018
Messages
114
Likes
19
Thanks. Alas, it is still full of nonsense. Take this part:

[attached screenshot of the quoted Q&A passage]

So first we are told something is "fairly" bit perfect. Fairly? If something is bit-perfect, then it can't be an approximation. You can't bastardize the term that way.

It then gets worse in the next paragraph when he talks about "interpolated samples." How the heck can interpolated samples be "bit exact," with what goes in being what goes out???

If this is a FIR filter, then each new sample is generated from the N input samples around it. There is no notion of bit-perfect anything. You are filtering, and the point of filtering is to modify the samples to get rid of their high-frequency components. How do you land on bit-perfect?

On the math, that is a theoretical solution to the filter. Once implemented in a DSP, approximations are almost always made, and then the accuracy is no longer there. This is what I meant by "resolution."

Can anyone else make sense out of what he is saying?

I think the commas are missing: "to, fairly, as bit perfect". Whether it is or not is another issue. I don't have the test equipment to measure that.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,834
Likes
16,496
Location
Monument, CO
Fairly bit-perfect.
Partly pregnant.

I learned about IIR and FIR filters decades ago: some are approximations, some have closed-form solutions, some do not. A college digital filter class might help sort that out. Or any one of the plethora of filter books.

Still pondering on averaging phase in the time domain and correcting in the filter... Averaging normally throws away some details; what is their reference for correction?

Interpolation means the original samples are still there at either end with the interpolated samples between. Does not mean much without knowing exactly how they are interpolating. Every oversampling converter must interpolate somehow unless they zero-pad the data.
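On the zero-padding remark: zero-stuffing on its own just creates images of the original spectrum at multiples of the old sample rate, which is why some interpolation filter always has to follow. A small illustration (generic DSP, no particular DAC):

```python
import numpy as np

fs, f0, n, L = 44100, 1000, 4096, 4
x = np.sin(2 * np.pi * f0 * np.arange(n) / fs)

xu = np.zeros(n * L)
xu[::L] = x                                      # zero-stuff to 4x the rate, no filtering
spec = np.abs(np.fft.rfft(xu * np.hanning(n * L)))
freqs = np.fft.rfftfreq(n * L, d=1.0 / (L * fs))

# Besides the 1 kHz tone, images appear near 43.1, 45.1, and 87.2 kHz.
strongest = freqs[np.argsort(spec)[-8:]]
print(np.sort(np.round(strongest)))
```

The interpolation filter's whole job is to knock those images down while (ideally) leaving the original samples alone.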
 

mindbomb

Active Member
Joined
Nov 11, 2017
Messages
284
Likes
175
Isn't preserving the original samples a computational shortcut? I feel like it would have been harder to make a filter that doesn't necessarily always preserve the original samples.

Also, my take on what schiit means when they talk about closed form filtering is that they are using lookup tables rather than doing processing locally.
 
Last edited:

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,834
Likes
16,496
Location
Monument, CO
Isn't preserving the original samples a computational shortcut? I feel like it would have been harder to make a filter that doesn't necessarily always preserve the original samples.

Also, my take on what schiit means when they talk about closed form filtering is that they are using lookup tables rather than doing processing locally.

Not quite sure of the question, so apologies if this is beneath you...

The output sample of a FIR filter at any point in time can be thought of as a weighted average of past, present, and "future" samples (future by virtue of on-chip delays that allow the sample to include pre- and post-cursor terms, the cursor being the sample we are looking at, at the cost of latency in the filter). Thus, in general, the actual raw input sample is not preserved. But it could be...

A basic interpolator, not necessarily a FIR filter though that is one way to do it, could output the raw input sample, then interpolated values, then the next raw sample, and so forth, with a delay equal to the number of interpolated samples plus any additional processing along the way. So it could be considered a "shortcut" compared to more complex filtering/interpolation, but of course I do not know exactly how they are performing the interpolation.

Mathematically "closed-form" means a closed-form solution to the filter equations. Look-up tables are an implementation scheme that may or may not imply a closed-form filter solution but look-up tables do set the resolution of the filter coefficients by the size, accuracy, and precision of the tables. That obviates a bunch of multipliers and such but I do not think look-up tables mean the filter is closed-form. My guess would be they are using filters that have closed-form solutions, which may or may not matter in the real world, but they could also be talking about a particular class of DSP filter implementations that are considered closed-form.
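To put a rough number on the table-resolution point, here is a sketch of what storing the coefficients of an ordinary windowed-sinc lowpass as 16-bit table entries does to the achievable stopband (the filter length, window, and word size below are arbitrary assumptions, not anything Schiit has published):

```python
import numpy as np
from scipy.signal import firwin, freqz

# Toy high-attenuation lowpass; length, window, and word size are arbitrary choices.
h = firwin(255, 0.45, window=("kaiser", 14))

# Pretend the coefficients live in a 16-bit look-up table.
h16 = np.round(h * 2**15) / 2**15

for taps, label in [(h, "full-precision coefficients"), (h16, "16-bit table")]:
    w, H = freqz(taps, worN=1 << 15)
    stopband = np.abs(H[w > 0.6 * np.pi])
    print(label, "worst stopband level:",
          round(20 * np.log10(stopband.max()), 1), "dB")
```

Roughly speaking, the quantized version cannot get much below the coefficient-quantization noise floor, which is exactly the size/accuracy/precision trade-off mentioned above.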

But I could well be wrong. Been too long since my grad filter classes to say more off-the-cuff and this is outside my area of expertise (though I have spent a lot of time in the filter arena and designed a number of them, analog and digital, over the years).

FWIWFM - Don
 

Jimster480

Major Contributor
Joined
Jan 26, 2018
Messages
2,880
Likes
2,032
Location
Tampa Bay
The marketing test of course!

Don't need no fancy equipment, just a bunch of people in a room, blue-sky thinking, getting all their ducks in a row. Granularity isn't the space they are moving forward in- it's all about picking the low hanging fruit and cascading down to a product opening and target demographic that thrives on that type of bullschiit.

Seriously, looking under the hood is not the conversation we need right now- I say, just sprinkle the magic, and as always, my door is open on this if we need to get things moving a little faster through the sales and delivery pipeline.

:)
That's how it happens in too many industries these days... and there are too many of us who realize it.
 

trl

Major Contributor
King of Mods
Joined
Feb 28, 2018
Messages
1,967
Likes
2,523
Location
Iasi, RO

derp1n

Senior Member
Joined
May 28, 2018
Messages
479
Likes
629
Rumor has it that the Bifrost MB has been silently revised (along with the Gungnir MB) sometime this year. Not sure if that means analog board changes like the Yggdrasil received, but it'll at least include the revised firmware with dither to address some of the poor measurements caused by truncation, plus the "Gen 5" USB board. Something to be aware of if you see Bifrost measurements that are better than Amir's results.

No upgrades offered for customers who purchased this "upgradable" DAC though.
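For anyone wondering what "dither to address truncation" buys in practice, here is a toy comparison of truncating a low-level sine to 16 bits versus adding TPDF dither before requantizing; this is a generic DSP illustration, not a claim about what the revised firmware actually does:

```python
import numpy as np

fs, f0, n = 48000, 997, 1 << 16
q = 2.0 ** -15                                       # one 16-bit LSB
x = 10 ** (-70 / 20) * np.sin(2 * np.pi * f0 * np.arange(n) / fs)   # -70 dBFS tone

tpdf = (np.random.rand(n) - np.random.rand(n)) * q   # triangular dither, +/- 1 LSB
for y, label in [(np.floor(x / q) * q, "truncated"),
                 (np.floor((x + tpdf) / q) * q, "TPDF dithered")]:
    err = y - x
    err -= err.mean()                                # ignore the DC offset of truncation
    spec = np.abs(np.fft.rfft(err * np.hanning(n)))
    print(label, "worst error spur:", round(20 * np.log10(spec.max() / n), 1), "dB")
```

Truncation turns the quantization error into harmonics of the signal; dither trades those spurs for a slightly higher but benign noise floor, which is what the truncation-related measurement complaints were about.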
 

derp1n

Senior Member
Joined
May 28, 2018
Messages
479
Likes
629
Found another review: https://www.avhub.com.au/product-reviews/hi-fi/schiit-bifrost-multibit-dac-review-test-497099.
The reviewer found about 5 dB lower noise when using a Mac instead of Windows, and also lower noise when using optical instead of USB. However, hum and its harmonics were 20-30 dB above the average noise floor.
The pictures show a Gen 2 USB board which may explain the poorer performance via USB, but they look like badly rescaled PR pics from Schiit. If it was a brand new unit at the time of review (March 2018) I'd expect it to have Gen 5 USB, but the Australian distributors probably have older stock.
 

Jimster480

Major Contributor
Joined
Jan 26, 2018
Messages
2,880
Likes
2,032
Location
Tampa Bay