"Christmas - the time to fix the computers of your loved ones" « Lord Wyrm

R300 pics !!

Started by tombman, 05.07.2002 - 22:23

Cobase

Mr. RAM
Registered: Jun 2001
Location: Linz
Posts: 17889
Quote from tombman
They won't manage any more than that with 0.15µm technology either, which is exactly why nvidia absolutely needs 0.13 :)

But you can bet your behind that the R300 in 0.13 will show up at the same time as the NV30, since both manufacturers have their chips made at TSMC.

-fenix-

OC Addicted
Registered: Dec 2001
Location: Wien 21
Posts: 4650
Quote from tombman
But if you give me the link where JC is enthusiastic about it, that would be great.

I got a 3Dlabs P10 card in last week, and yesterday I put it through its
paces. Because my time is fairly over committed, first impressions often
determine how much work I devote to a given card. I didn't speak to ATI for
months after they gave me a beta 8500 board last year with drivers that
rendered the console incorrectly. :-)

I was duly impressed when the P10 just popped right up with full functional
support for both the fallback ARB_ extension path (without specular
highlights), and the NV10 NVidia register combiners path. I only saw two
issues that were at all incorrect in any of our data, and one of them is
debatable. They don't support NV_vertex_program_1_1, which I use for the NV20
path, and when I hacked my programs back to 1.0 support for testing, an
issue did show up, but still, this is the best showing from a new board from
any company other than Nvidia.

It is too early to tell what the performance is going to be like, because they
don't yet support a vertex object extension, so the CPU is hand feeding all
the vertex data to the card at the moment. It was faster than I expected for
those circumstances.

Given the good first impression, I was willing to go ahead and write a new
back end that would let the card do the entire Doom interaction rendering in
a single pass. The most expedient sounding option was to just use the Nvidia
extensions that they implement, NV_vertex_program and NV_register_combiners,
with seven texture units instead of the four available on GF3/GF4. Instead, I
decided to try using the prototype OpenGL 2.0 extensions they provide.

The implementation went very smoothly, but I did run into the limits of their
current prototype compiler before the full feature set could be implemented.
I like it a lot. I am really looking forward to doing research work with this
programming model after the compiler matures a bit. While the shading
languages are the most critical aspects, and can be broken out as extensions
to current OpenGL, there are a lot of other subtle-but-important things that
are addressed in the full OpenGL 2.0 proposal.

I am now committed to supporting an OpenGL 2.0 renderer for Doom through all
the spec evolutions. If anything, I have been somewhat remiss in not pushing
the issues as hard as I could with all the vendors. Now really is the
critical time to start nailing things down, and the decisions may stay with
us for ten years.

A GL2 driver won't give any theoretical advantage over the current back ends
optimized for cards with 7+ texture capability, but future research work will
almost certainly be moving away from the lower level coding practices, and if
some new vendor pops up (say, Rendition back from the dead) with a next-gen
card, I would strongly urge them to implement GL2 instead of proprietary
extensions.

I have not done a detailed comparison with Cg. There are a half dozen C-like
graphics languages floating around, and honestly, I don't think there is a
hell of a lot of usability difference between them at the syntax level. They
are all a whole lot better than the current interfaces we are using, so I hope
syntax quibbles don't get too religious. It won't be too long before all real
work is done in one of these, and developers that stick with the lower level
interfaces will be regarded like people that write all-assembly PC
applications today. (I get some amusement from the all-assembly crowd, and it
can be impressive, but it is certainly not effective)

I do need to get up on a soapbox for a long discourse about why the upcoming
high level languages MUST NOT have fixed, queried resource limits if they are
going to reach their full potential. I will go into a lot of detail when I
get a chance, but drivers must have the right and responsibility to multipass
arbitrarily complex inputs to hardware with smaller limits. Get over it.

John Carmack
Edited by -fenix- on 10.07.2002, 00:45
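
For context on what that "single pass" buys: each Doom interaction is essentially a per-pixel evaluation of a bump-mapped diffuse term plus a specular term for one light, tinted and attenuated by that light. Below is a minimal CPU-side sketch of that math in plain C; the vec3 type, the helper functions, the fixed specular exponent and all the names are invented purely for illustration and are not taken from the Doom source or from any of the OpenGL extensions mentioned above. The comment in the middle also hints at the multipass point Carmack makes at the end: hardware with too few texture units or combiner stages has to split the same sum across additive passes.

[code]
/*
 * Rough sketch of a Doom-style "interaction" for one light and one
 * surface point: bump-mapped diffuse + specular, tinted and attenuated.
 * A card with enough texture units (or a GL2-style shader) can evaluate
 * this in a single pass; weaker hardware splits it across passes.
 * All names and constants here are made up for illustration.
 */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  scale(vec3 a, float s) { vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }
static vec3  add3(vec3 a, vec3 b)   { vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }
static vec3  mul3(vec3 a, vec3 b)   { vec3 r = { a.x*b.x, a.y*b.y, a.z*b.z }; return r; }

static vec3 normalize3(vec3 a)
{
    float len = sqrtf(dot3(a, a));
    return scale(a, len > 0.0f ? 1.0f / len : 0.0f);
}

/* One interaction: everything a single pass would have to fold together. */
static vec3 interaction(vec3 normalMap,   /* per-pixel normal from the bump map */
                        vec3 diffuseMap,  /* per-pixel diffuse texture sample   */
                        vec3 specularMap, /* per-pixel specular texture sample  */
                        vec3 lightColor,
                        float attenuation,
                        vec3 toLight, vec3 toViewer)
{
    vec3  N = normalize3(normalMap);
    vec3  L = normalize3(toLight);
    vec3  H = normalize3(add3(L, normalize3(toViewer)));  /* half-angle vector */

    float NdotL = fmaxf(dot3(N, L), 0.0f);
    float NdotH = fmaxf(dot3(N, H), 0.0f);

    vec3 diffuse  = scale(diffuseMap,  NdotL);
    vec3 specular = scale(specularMap, powf(NdotH, 16.0f));

    /* Cards that cannot evaluate all of this at once can split it instead:
     * draw the diffuse term in one pass, then add the specular term with
     * additive framebuffer blending. Same sum, more passes. */
    return scale(mul3(add3(diffuse, specular), lightColor), attenuation);
}

int main(void)
{
    vec3 n  = { 0.2f, 0.1f, 1.0f };   /* sample from the normal (bump) map */
    vec3 d  = { 0.6f, 0.5f, 0.4f };   /* sample from the diffuse map       */
    vec3 s  = { 0.3f, 0.3f, 0.3f };   /* sample from the specular map      */
    vec3 lc = { 1.0f, 0.9f, 0.8f };   /* light color                       */
    vec3 L  = { 0.3f, 0.4f, 0.8f };   /* surface-to-light direction        */
    vec3 V  = { 0.0f, 0.0f, 1.0f };   /* surface-to-viewer direction       */

    vec3 c = interaction(n, d, s, lc, 0.8f, L, V);
    printf("lit color: %.3f %.3f %.3f\n", c.x, c.y, c.z);
    return 0;
}
[/code]

(Compiles as ordinary C, e.g. gcc interaction.c -lm.)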

tombman

the only truth...
Registered: Mar 2000
Location: Wien
Posts: 9496
[QUOTE]Originally posted by -fenix-
[John Carmack's .plan update quoted in full; see -fenix-'s post above][/QUOTE]

hmm, not a word about performance, just that he generally likes the thing.

tombman

the only truth...
Registered: Mar 2000
Location: Wien
Posts: 9496
Quote from Bobby Digital
Hmm? Since when do you compare a wildcat with a GF4?
And you really ought to know that in the benchmarks the wildcat models are actually made for, a GF4 would suck pretty badly.
Geforce = gamer card, nothing more.

Well, in the SPECviewperf benchmark the gf4 (the quadro versions of course, not the gamer gf4) is on par with the wildcat..

look and weep :)
http://www.specbench.org/gpc/opc.data/vp7/summary.html

The 900 quadro costs 1400€, the 6110 wildcat costs 2600€ -> so who's getting screwed here? ;)

And what do you think the nv30 will do to the wildcat? Right, WIPE THE FLOOR with it! :D
Edited by tombman on 10.07.2002, 01:10

Hermander

OC Addicted
Registered: Sep 2000
Location: Vienna
Posts: 7627
300-315MHz isn't weak at all for that transistor count on 0.13µ... :rolleyes: you can surely squeeze out quite a bit more with proper cooling... so 340-360MHz should be doable with proper cooling (water cooling)... memory will presumably be synchronous again, right?!? so around 600-??MHz... hmmm... I want an R300.. :D

manalishi

tl;dr
Registered: Feb 2001
Location: Feldkirch
Posts: 5977
the quadro is nothing more than a modded gf4ti - you can turn any 4600 into a quadro, tombi

-fenix-

OC Addicted
Registered: Dec 2001
Location: Wien 21
Posts: 4650
Quote from tombman
hmm, not a word about performance, just that he generally likes the thing.

"It was faster than I expected for those circumstances"

"Given the good first impression, I was willing to go ahead and write a new back end that would let the card do the entire Doom interaction rendering in a single pass."

and the mere fact that he likes it (and that it's so scalable) means he'll be able (and willing) to optimize for it well
e.g. the OGL2 extensions
otherwise things wouldn't look so rosy (see parhelia & GF4MX)

-> more speed
Edited by -fenix- on 10.07.2002, 13:10

Bobby Digital

Addicted
Registered: Jun 2002
Location: Dresden
Posts: 443
Quote from manalishi
the quadro is nothing more than a modded gf4ti - you can turn any 4600 into a quadro, tombi
Well, you must have gotten that wrong somewhere.
Just because you could turn a Geforce2Go into a Quadro doesn't mean by a long shot that it works with every Geforce.
And there definitely are differences between a gamer Geforce4 and a Quadro4 750XGL.

tombman

the only truth...
Registered: Mar 2000
Location: Wien
Posts: 9496
Quote from manalishi
the quadro is nothing more than a modded gf4ti - you can turn any 4600 into a quadro, tombi

show me a link where that's described ;)
:rolleyes:

Turrican

Legend
Amiga500-Fan
Registered: Jul 2002
Location: Austria,Stmk.
Posts: 23260
the r300 is really awesome

:D

-fenix-

OC Addicted
Registered: Dec 2001
Location: Wien 21
Posts: 4650
Quote from Bobby Digital
Well, you must have gotten that wrong somewhere.
Just because you could turn a Geforce2Go into a Quadro doesn't mean by a long shot that it works with every Geforce.
And there definitely are differences between a gamer Geforce4 and a Quadro4 750XGL.

so far you've always been able to turn a GF256, GF2 and GF3 into a quadro with a bridged contact/resistor or something similar

some gainward cards (GF3Ti200) even had a jumper for it
flip the jumper -> quadro

with the R8500 it's even more extreme, there only the driver is different
when I install the FireGL driver, wireframe models etc. in 3ds max are instantly twice as fast :eek:

unfortunately games like Q3 have problems with it (missing textures etc.)

I'd assume it's the same chip on the GF4 as well, simply because it's way cheaper than two different architectures within one product line

Bobby Digital

Addicted
Registered: Jun 2002
Location: Dresden
Posts: 443
What I wanted to say is that even if you turn a GF2 into a Quadro, you still don't get a full-fledged Quadro.
Unlike the gamer cards, they don't cut corners on those boards, so for example SG-DDR memory is fitted instead of SD-DDR.
How it looks with the chips I don't know exactly, but I also think that in this case it's probably more of a driver thing.