Apple's 10.2-inch iPad drops to $249 in an early Black Friday deal

This is the best price we’ve seen all year.

Apple’s 9th-gen entry-level iPad is on sale as part of an early Black Friday Amazon deal. You can snag the tablet for $249 instead of the usual $329. That’s a savings of $80, or a discount of nearly 25 percent. This is the best price we’ve seen all year for Apple’s tablet.
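
If you want to double-check the math yourself, here's a quick back-of-the-envelope sketch using only the prices quoted above (nothing here comes from the listing itself):

```python
# Quick check of the discount figures quoted above.
list_price = 329
sale_price = 249

savings = list_price - sale_price             # -> 80
discount_pct = savings / list_price * 100     # -> ~24.3 percent, i.e. "nearly 25 percent"

print(f"Savings: ${savings}")
print(f"Discount: {discount_pct:.1f}%")
```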

This is the standard 10.2-inch iPad design that’s been around since, well, forever. Despite lacking some of the more advanced features of the iPad Pro and some of the portability of the iPad Air, this model still offers plenty of bang for your buck. There’s a reason, after all, that it made our list of best tablets in 2023, even with stiff competition.


This model ships with 64GB of storage, an A13 Bionic chip and a decent battery that lasts a full day of use before requiring a trip to the outlet. The speakers are a bit janky but, wait for it, the 9th-gen iPad actually has a bona fide headphone jack. The cameras are nothing spectacular, but tablets have never been on the cutting edge of image-capturing tech, given their larger size compared to smartphones.

The A13 Bionic chip is capable but lacks some of the oomph of Apple’s newer chipsets, like the M1 and above. Even with the relatively ancient chipset, this iPad boasts more than enough speed for casual tasks. It also integrates with first-party accessories like Apple’s Smart Keyboard and the first-generation Apple Pencil. At $249, this is a great deal for those looking for a simple tablet to watch movies on and peruse the web.

NVIDIA's Eos supercomputer just broke its own AI training benchmark record

The system can train a 175 billion parameter GPT-3 model in under four minutes. It needs just 7.2 seconds for BERT.


Depending on the hardware you're using, training a large language model of any significant size can take weeks, months, even years to complete. That's no way to do business — nobody has the electricity and time to be waiting that long. On Wednesday, NVIDIA unveiled the newest iteration of its Eos supercomputer, one powered by more than 10,000 H100 Tensor Core GPUs and capable of training a 175 billion-parameter GPT-3 model on 1 billion tokens in under four minutes. That's three times faster than the previous benchmark on the MLPerf AI industry standard, which NVIDIA set just six months ago.

Eos represents an enormous amount of compute. It leverages 10,752 GPUs strung together using NVIDIA's Infiniband networking (moving a petabyte of data a second) and 860 terabytes of high-bandwidth memory (36PB/sec of aggregate bandwidth and 1.1PB/sec of interconnect bandwidth) to deliver 40 exaflops of AI processing power. The entire cloud architecture is composed of 1,344 nodes, individual servers that companies can rent access to for around $37,000 a month to expand their AI capabilities without building out their own infrastructure.
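
For a rough sense of how those aggregate figures break down, here's a quick sanity-check sketch using only the numbers quoted above; the per-GPU memory figure assumes the 860TB is spread evenly across the H100s:

```python
# Decompose Eos's aggregate specs into per-node and per-GPU figures.
total_gpus = 10_752
total_nodes = 1_344
total_hbm_tb = 860            # terabytes of high-bandwidth memory system-wide

gpus_per_node = total_gpus / total_nodes              # -> 8 GPUs per node
hbm_per_gpu_gb = total_hbm_tb * 1_000 / total_gpus    # -> ~80GB per GPU, assuming an even split

print(f"GPUs per node: {gpus_per_node:.0f}")     # 8
print(f"HBM per GPU:  {hbm_per_gpu_gb:.0f} GB")  # ~80
```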

In all, NVIDIA set six records in nine benchmark tests: the 3.9-minute mark for GPT-3; 2.5 minutes to train a Stable Diffusion model using 1,024 Hopper GPUs; a minute even to train DLRM; 55.2 seconds for RetinaNet; 46 seconds for 3D U-Net; and just 7.2 seconds to train the BERT-Large model.

NVIDIA was quick to note that the 175 billion-parameter version of GPT-3 used in the benchmarking is not trained on the model's full dataset (nor was the Stable Diffusion model). The full training run spans around 3.7 trillion tokens and is just flat-out too big and unwieldy to use as a benchmarking test. For example, it would take 18 months to train on the older A100 system with 512 GPUs, whereas Eos needs just eight days.
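
Taking the article's own figures at face value (18 months on the 512-GPU A100 system versus eight days on Eos, and treating a month as roughly 30 days), the implied speedup works out to roughly 68x; a minimal sketch:

```python
# Rough speedup implied by the training-time figures quoted above.
a100_system_days = 18 * 30    # ~18 months on the older 512-GPU A100 system
eos_days = 8                  # Eos's estimated time for the same full-size job

speedup = a100_system_days / eos_days
print(f"Implied speedup: ~{speedup:.0f}x")   # -> ~68x
```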

So instead, NVIDIA and MLCommons, which administers the MLPerf standard, leverage a more compact version that uses 1 billion tokens (the smallest unit of data that generative AI systems understand). This test uses a GPT-3 version with the same number of potential switches to flip as the full-size model (those 175 billion parameters), just a much more manageable data set to train on (a billion tokens versus 3.7 trillion).

Granted, much of that impressive performance improvement came from the fact that this recent round of tests employed 10,752 H100 GPUs, compared to the 3,584 Hopper GPUs the company used in June's benchmarking trials. However, NVIDIA explains that despite tripling the number of GPUs, it managed to maintain 2.8x scaling in performance, a 93 percent efficiency rate, through the generous use of software optimization.
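
Here's where that 93 percent figure comes from, using only the GPU counts and the 2.8x speedup quoted above:

```python
# Scaling efficiency: achieved speedup divided by the ideal (linear) speedup.
gpus_june = 3_584      # Hopper GPUs in June's MLPerf submission
gpus_now = 10_752      # H100 GPUs in the latest Eos run
speedup = 2.8          # performance scaling NVIDIA reports

ideal_speedup = gpus_now / gpus_june        # -> 3.0x if scaling were perfectly linear
efficiency = speedup / ideal_speedup        # -> ~0.93

print(f"Ideal speedup: {ideal_speedup:.1f}x")
print(f"Scaling efficiency: {efficiency:.0%}")   # -> 93%
```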

"Scaling is a wonderful thing," Salvator said."But with scaling, you're talking about more infrastructure, which can also mean things like more cost. An efficiently scaled increase means users are "making the best use of your of your infrastructure so that you can basically just get your work done as fast [as possible] and get the most value out of the investment that your organization has made."

The chipmaker was not alone in its development efforts. Microsoft's Azure team submitted a similar 10,752 H100 GPU system for this round of benchmarking, and achieved results within two percent of NVIDIA's.

"[The Azure team have] been able to achieve a performance that's on par with the Eos supercomputer," Dave Salvator Director of Accelerated Computing Products at NVIDIA, told reporters during a Tuesday prebrief. What's more "they are using Infiniband, but this is a commercially available instance. This isn't some pristine laboratory system that will never have actual customers seeing the benefit of it. This is the actual instance that Azure makes available to its customers."

NVIDIA plans to apply these expanded compute abilities to a variety of tasks, including the company's ongoing work in foundational model development, AI-assisted GPU design, neural rendering, multimodal generative AI and autonomous driving systems.

"Any good benchmark looking to maintain its market relevance has to continually update the workloads it's going to throw at the hardware to best reflect the market it's looking to serve," Salvator said, noting that MLCommons has recently added an additional benchmark for testing model performance on Stable Diffusion tasks. "This is another exciting area of generative AI where we're seeing all sorts of things being created" — from programming code to discovering protein chains.

These benchmarks are important because, as Salvator points out, the current state of generative AI marketing can be a bit of a "Wild West." The lack of stringent oversight and regulation means "we sometimes see with certain AI performance claims where you're not quite sure about all the parameters that went into generating those particular claims." MLPerf provides the professional assurance that the benchmark numbers companies generate using its tests "were reviewed, vetted, in some cases even challenged or questioned by other members of the consortium," Salvator said. "It's that sort of peer reviewing process that really brings credibility to these results."

NVIDIA has been steadily focusing on its AI capabilities and applications in recent months. "We are at the iPhone moment for AI," CEO Jensen Huang said during his GTC keynote in March. At that time, the company announced its DGX Cloud system, which portions out slivers of the supercomputer's processing power in instances of eight H100 or A100 chips, each with 80GB of VRAM (640GB of memory in total). The company expanded its supercomputing portfolio with the release of DGX GH200 at Computex in May.

The Morning After: Apple’s new MacBook lineup makes much more sense

An M3 chip for every situation.


Apple’s MacBook problem was a confusing lineup of similar machines with different names, different chips, different hardware and the rest. But it may have finally solved the problem. The long-rumored 15-inch MacBook Air arrived months ago, and then Apple surprised us by delivering two MacBook Pro revisions — notably in less than 12 months — to showcase the company’s most powerful chips yet. The new M3-equipped 14- and 16-inch MacBook Pros are a clearer sign of Apple’s direction.


The company has killed off the long-suffering 13-inch MacBook Pro, and in the same stroke, put an end to an aging design and the divisive, frustrating Touch Bar. These Pro machines — especially the M3 Max models — are great for professionals, and the MacBook Airs are for everyone else.

I think, for the first time in a long time, Apple’s laptop lineup finally makes sense.

Bored Ape NFT event leads to at least 15 attendees reporting severe eye burn

Organizer Yuga Labs is ‘aware of the eye-related issues.’


So you thought just the idea of attending an NFT event was torturous enough. At least 15 visitors at Yuga Labs’ ApeFest, a celebration of Bored Ape Yacht Club NFTs (which are still a thing), may have experienced serious eye injuries. Bloomberg reports that multiple people attending the event in Hong Kong last weekend experienced vision problems, which they suspect stem from the event’s stage lighting. Some claim doctors subsequently diagnosed them with welder’s eye, a condition caused by overexposure to ultraviolet rays. The company is apparently investigating the reports.

Every car is a smart car, and it’s a privacy nightmare

Smart cars, dumb privacy policies, terms and conditions.

Mozilla recently reported that all 25 car brands it reviewed failed its privacy tests. While all, in Mozilla’s estimation, overreached in their data collection and use policies, some even included caveats about obtaining highly invasive information. Today’s cars can collect personal information, and the fine print of user agreements describes how manufacturers get you to consent every time.

WeWork files for Chapter 11 bankruptcy protection

The company has struggled.

In another twist in the WeWork saga this week, the office space rental company has filed for bankruptcy protection. Following reports last week that the company was expected to file for Chapter 11 protection, WeWork’s shares were halted on the New York Stock Exchange on Monday. According to The New York Times, it described its bankruptcy filing as a “comprehensive reorganization” of its business. WeWork has been toiling in a real estate market shaken by rising borrowing costs while also facing the pandemic-accelerated shift of millions more people working remotely.

MediaTek takes on Qualcomm with its latest flagship mobile processor

The Dimensity 9300 bests the Snapdragon 8 Gen 3 chip in some key benchmarks, the company claims.


MediaTek has unveiled its flagship Dimensity 9300 mobile processor, built on TSMC's third-generation 4nm+ technology. The company claims much improved performance and power consumption over last year's Dimensity 9200, and performance on par with Qualcomm's new Snapdragon 8 Gen 3 processor in some key benchmarks. That makes three flagship mobile systems-on-chip launched in the last month (including Google's Tensor G3), showing some healthy competition in the high-end mobile processor space.

The Dimensity 9300 has what MediaTek calls an "all-big core architecture" oriented toward performance, with four ultra-large cores and four big cores, making eight altogether. That compares to the Snapdragon 8 Gen 3, which comes with a single ultra-large Cortex-X4 core, along with five big Cortex-A720 cores and two smaller Cortex-A520 cores to balance energy savings and performance.
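
To make that comparison concrete, here's a small illustrative sketch tallying the two core layouts described above (the cluster labels are just shorthand for this example):

```python
# Tally the CPU core layouts described above; both chips total eight cores.
dimensity_9300 = {"ultra-large": 4, "big": 4}
snapdragon_8_gen_3 = {"ultra-large (Cortex-X4)": 1,
                      "big (Cortex-A720)": 5,
                      "small (Cortex-A520)": 2}

for name, layout in (("Dimensity 9300", dimensity_9300),
                     ("Snapdragon 8 Gen 3", snapdragon_8_gen_3)):
    print(f"{name}: {sum(layout.values())} cores -> {layout}")
```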

With all that, MediaTek says the chip delivers 15 percent more performance than the Dimensity 9200 at the same power level, or 33 percent less power draw at the same performance. It also allows for 40 percent more peak performance, according to the company. MediaTek also claims an AnTuTu score of 2,130,000+, which roughly matches the Snapdragon 8 Gen 3's AnTuTu score.

MediaTek is also claiming a 46 percent jump in GPU performance over the previous processor at the same power levels, plus higher frame rates than its rival on certain gaming benchmarks. It also offers much improved deep learning performance over the Dimensity 9200 thanks to the new APU 790 AI processor, with up to eight times the processing speed and Stable Diffusion image generation in under a second.

It also has features that improve computational photography and video: support for always-on HDR at 4K 60p, "real-time bokeh tracking" at 4K 30fps, AI processing on RAW photos and videos, and support for the new Ultra HDR format in Android 14.

That's all quite impressive if accurate, though tests will need to bear those claims out. In any case, it looks like a solid alternative to Qualcomm's Snapdragon 8 Gen 3, and it's likely to appear in a number of upcoming devices, possibly including the Vivo X100 and X100 Pro.
