Over the weekend, word broke that Univision was planning to sell off Fusion Media, the brand portfolio containing a chunk of The Onion and various media properties purchased in a Gawker fire sale less than two years back. Today, the company confirmed with TechCrunch that it has "initiated a formal process to explore the sale" of both the Gizmodo Media Group and The Onion.
The company isn't offering much information beyond what's currently available in the press release, but the proposed sale includes a bevy of strong media brands, including Gizmodo, Jezebel, Deadspin, Lifehacker, Splinter, The Root, Kotaku, Jalopnik, The Onion, Clickhole, The A.V. Club and The Takeout.
The Spanish-language broadcaster purchased the Gizmodo assets from Gawker Media for $135 million back in 2016, after the company was rocked by a Peter Thiel-backed Hulk Hogan lawsuit. The portfolio was rolled up along with Fusion TV into Fusion Media, a millennial-focused pivot into digital by a brand that had traditionally had issues keeping up with the times.
But Univision's year has seen a CEO shuffle and ongoing restructuring, with multiple rounds of layoffs. Univision reportedly attempted to sell a stake in the company last year, but ultimately failed due to "skittish" investors.
This time out, it seems Univision is all in, though it notes that a sale is anything but guaranteed, writing, "There is no assurance that the process to explore the sale of these assets will result in any transaction or the adoption of any other strategic alternative."
Read more: Univision confirms it is exploring the sale of Gizmodo Media Group and The Onion
Samsung's got a new smartwatch on the way. That much seems certain. It's been about a year since the last big announcement, and the company is about to have two large platforms in the form of August's Note 9 Unpacked event and Berlin's IFA trade show the following month.
A couple of new tidbits, however, are fueling speculation that things might be a little different this time around. First, there's a trademark filing in Korea for a Samsung Galaxy Watch logo. The company dropped the Galaxy bit from its Gear line between the first- and second-generation watches, back in 2014.
Among the more notable changes on that device was the move from Android to Tizen, the open-source mobile operating system Samsung has continued to carry the torch for on subsequent watches. The company never really looked back on that decision, even after the arrival of Android Wear.
But 2018 has found Google making a more aggressive push around its wearable operating system. I/O saw some upgrades, following a name change to Wear OS. That, along with a smattering of online rumors, points to Samsung potentially giving Google's other mobile OS a big go.
It's hard to make the case that Google has done much to warrant another look at the operating system. The smartwatch category has largely stagnated for everyone but Apple and Fitbit, and the last couple of updates haven't brought a lot to the table. But perhaps there's something to be said for increased compatibility across the Galaxy line.
Last year's Gear Sport found Samsung offering up a more universal piece of hardware than its traditional, restrictively large devices, but a ground-up rethink of the line certainly couldn't hurt.
Read more: Samsung's 'Galaxy Watch' trademark fuels speculation about a Wear OS gadget
In a recent MIT Technology Review article, author Virginia Eubanks discusses her book, Automating Inequality. In it, she argues that the poor are the testing ground for new technology that increases inequality — highlighting that when algorithms are used in determining eligibility for and allocation of social services, it becomes harder for people to get those services, while forcing them to endure an invasive process of personal data collection.
I've spoken a lot about the dangers associated with government use of face recognition in law enforcement, yet this article opened my eyes to the unfair and potentially life-threatening practice of refusing or reducing support services to citizens who may really need them — through determinations based on algorithmic data.
To some extent, we're used to companies making arbitrary decisions about our lives — mortgages, credit card applications, car loans, etc. Yet these decisions are based almost entirely on straightforward factors of determination — like credit score, employment and income. In the case of algorithmic determination in social services, there is bias in the form of outright surveillance combined with forced PII sharing imposed upon recipients.
Eubanks gives as an example the Pittsburgh County Office of Children, Youth and Families using the Allegheny Family Screening Tool (AFST) to assess the risk of child abuse and neglect through statistical modeling. The use of the tool leads to disproportionate targeting of poor families because the data fed to its algorithms often comes from public schools, the local housing authority, unemployment services, juvenile probation services and the county police, to name just a few — basically, the data of low-income citizens who typically use these services and interact with them regularly. Conversely, data from private services such as private schools, nannies and private mental health and drug treatment services isn't available.
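To make the mechanism concrete, here is a minimal sketch of how a screening score can encode this data-availability skew. The feature names and weights are hypothetical illustrations, not the actual AFST model:

```python
# Minimal sketch of how a screening tool's risk score can encode
# data-availability bias. Feature names and weights are hypothetical;
# the real AFST is far more complex and is not public in this form.

# Records of agency contact exist almost exclusively for families who
# rely on public services; private-sector equivalents leave no trace.
PUBLIC_RECORD_FEATURES = {
    "public_school_referrals": 0.20,
    "housing_authority_contacts": 0.15,
    "unemployment_claims": 0.10,
    "juvenile_probation_contacts": 0.30,
    "county_police_contacts": 0.25,
}

def risk_score(family_record: dict) -> float:
    """Weighted sum over whatever agency records exist for a family."""
    return sum(
        weight * family_record.get(feature, 0)
        for feature, weight in PUBLIC_RECORD_FEATURES.items()
    )

# A low-income family interacts with the agencies that feed the model...
poor_family = {"public_school_referrals": 2, "housing_authority_contacts": 3}

# ...while a wealthier family's private school, nanny and private
# treatment providers report nothing, so its record is simply empty.
wealthy_family = {}

print(risk_score(poor_family))     # 0.85 -- flagged for scrutiny
print(risk_score(wealthy_family))  # 0.0  -- invisible to the system
```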
Determination tools like AFST equate poverty with signs of risk of abuse, which is blatant classism — and a consequence of the dehumanization of data. Irresponsible use of AI in this capacity, like that of its use in law enforcement and government surveillance, has the real potential to ruin lives.
Taylor Owen, in his 2015 article titled The Violence of Algorithms, described a demonstration he witnessed by intelligence analytics software company Palantir, and made two major points in response — the first being that these systems are often written by humans, based on data tagged and entered by humans, and as a result are "chock full of human bias and errors." He then suggested that these systems are increasingly being used for violence.
"What we are in the process of building is a vast real-time, 3-D representation of the world. A permanent record of us… but where does the meaning in all this data come from?" he asked, pointing to an inherent issue with AI and data sets.
Historical data is useful only when it is given meaningful context, and many of these data sets are given none. When we are dealing with financial data like loans and credit cards, determinations, as I mentioned earlier, are based on numbers. While there are surely errors and mistakes made during these processes, being deemed unworthy of credit is unlikely to bring the police to your door.
However, a system built to predict deviancy, which uses arrest data as a main factor in determination, is not only likely to lead to police involvement — it is intended to do so.
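The self-reinforcing nature of that design is easy to demonstrate. Below is a toy simulation, with entirely hypothetical numbers, of the feedback loop that arrest-driven prediction creates:

```python
import random

# Toy feedback-loop simulation (hypothetical numbers, not any real system):
# two neighborhoods with identical true offense rates, but patrols are
# allocated in proportion to prior arrest counts.
random.seed(0)
TRUE_OFFENSE_RATE = 0.05       # identical in both neighborhoods
arrests = {"A": 10, "B": 5}    # historical skew: A was patrolled more

for year in range(10):
    total = sum(arrests.values())
    # Patrols follow the data: allocation is proportional to arrest records.
    allocation = {hood: int(100 * count / total) for hood, count in arrests.items()}
    for hood, patrols in allocation.items():
        # More patrols observe more offenses, producing more arrest records,
        # which in turn attract more patrols the following year.
        arrests[hood] += sum(
            1 for _ in range(patrols) if random.random() < TRUE_OFFENSE_RATE
        )

# The initial skew compounds even though underlying behavior is identical.
print(arrests)
```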

When we recall modern historical policies that were perfectly legal in their intention to target minority groups, Jim Crow certainly comes to mind. And let's also not forget that these laws were not declared unconstitutional until 1967, despite the Civil Rights Act of 1964.
In this context you can clearly see that, according to the Constitution, Blacks have only been considered full Americans for 51 years. Current algorithmic biases, whether intentional or inherent, are creating a system whereby the poor and minorities are being further criminalized and marginalized.
Clearly, there is an ethical issue around the responsibility we have as a society to do everything in our power to avoid helping governments get better at killing people. Yet the lion's share of this responsibility lies in the lap of those of us who are actually training the algorithms — and clearly, we should not be putting systems that are incapable of nuance and conscience in the position of informing authority.
In her work, Eubanks has suggested something close to a Hippocratic oath for those of us working with algorithms — an intent to do no harm, to stave off bias and to make sure that systems do not become cold, hard oppressors.
To this end, Joy Buolamwini of MIT, the founder and leader of the Algorithmic Justice League, has created a pledge to use facial analysis technology responsibly.
The pledge includes commitments like showing value for human life and dignity, which includes refusing to engage in the development of lethal autonomous weapons and not equipping law enforcement with facial analysis products and services for unwarranted individual targeting.
This pledge is an important first step in the direction of self-regulation, which I see as the beginning of a larger grass-roots regulatory process around the use of facial recognition.
Read more: In the public sector, algorithms need a conscience
WhatsApp just introduced a new feature designed to help its users identify the origin of information that they receive in the messaging app. For the first time, a forwarded WhatsApp message will include an indicator that marks it as forwarded. It's a small shift for the messaging platform, but potentially one that could make a big difference in the way people transmit information, especially dubious viral content, over the app.
The newest version of WhatsApp includes the feature, which marks forwarded messages with subtle but hard-to-miss italicized text above the content of a message.
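Conceptually, the change amounts to one new piece of message metadata that survives the act of forwarding. Here is a minimal sketch of the idea, using hypothetical type and field names (WhatsApp's internal protocol is not public):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Message:
    sender: str
    text: str
    forwarded: bool = False  # rendered by the client as an italicized label

def forward(message: Message, new_sender: str) -> Message:
    """Forwarding copies the text but permanently marks provenance."""
    return replace(message, sender=new_sender, forwarded=True)

original = Message(sender="alice", text="Did you hear about this?")
relayed = forward(original, new_sender="bob")
assert relayed.forwarded  # recipients see the message did not originate with bob
```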
The forwarded-message designation is meant as a measure to control the spread of viral misinformation in countries like India, where the company has 200 million users. Misinformation spread through the app has been linked to the mob killings of multiple men who were targeted by false rumors accusing them of kidnapping children. Those rumors are believed to have spread through Facebook and WhatsApp. To that end, Facebook-owned WhatsApp bought full-page ads in major Indian newspapers to raise awareness about the perils of spreading misinformation.
Last week, India's Information Technology Ministry issued a warning to WhatsApp specifically:
Instances of lynching of innocent people have been noticed recently because of large number of irresponsible and explosive messages filled with rumours and provocation are being circulated on WhatsApp. The unfortunate killing in many states such as Assam, Maharashtra, Karnataka, Tripura and west Bengals are deeply painful and regretable.
While the Law and order machinery is taking steps to apprehend the culprits, the abuse of platform like WhatsApp for repeated circulation of such provocative content are equally a matter of deep concern. The Ministry of Electronics and Information Technology has taken serious note of these irresponsible messages and their circulation in such platforms. Deep disapproval of such developments has been conveyed to the senior management of the WhatsApp and they have been advised that necessary remedial measures should be taken to prevent proliferation of these fake and at times motivated/sensational messages. The Government has also directed that spread of such messages should be immediately contained through the application of appropriate technology.
It has also been pointed out that such platform cannot evade accountability and responsibility specially when good technological inventions are abused by some miscreants who resort to provocative messages which lead to spread of violence.
The Government has also conveyed in no uncertain terms that WhatsApp must take immediate action to end this menace and ensure that their platform is not used for such malafide activities.
In a blog post accompanying the new message feature, WhatsApp encouraged its users to stop and think before sharing a forwarded message.
Read more: WhatsApp now marks forwarded messages to curb the spread of deadly misinformation
SolarWinds, the company behind tools like Pingdom, Papertrail, Loggly and a number of other IT management tools, today announced it has acquired Trusted Metrics, a company that helps businesses monitor incoming threats to their networks and servers. This move follows SolarWinds' acquisition of Loggly earlier this year. Among other things, Loggly also provides a number of security tools for enterprises.
Today's acquisition of Trusted Metrics is clearly part of the company's strategy to build out its security portfolio, and SolarWinds is actually rolling Trusted Metrics into a new security product called SolarWinds Threat Monitor. Like Trusted Metrics, SolarWinds Threat Monitor helps businesses protect their networks by automatically detecting suspicious activity and malware.
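As a rough illustration of what "automatically detecting suspicious activity" can mean in practice, here is a toy threshold-based monitor over a log stream. This is a generic sketch under an assumed log format, not SolarWinds Threat Monitor's actual logic, which draws on much richer signals:

```python
import re
from collections import Counter
from typing import Iterable, Iterator

# Count failed SSH logins per source IP and alert past a threshold.
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

def monitor(log_lines: Iterable[str]) -> Iterator[str]:
    failures: Counter = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            ip = match.group(1)
            failures[ip] += 1
            if failures[ip] == THRESHOLD:  # alert once per offending IP
                yield f"ALERT: possible brute force from {ip}"

logs = ["Failed password for root from 203.0.113.7 port 22"] * 6
for alert in monitor(logs):
    print(alert)  # -> ALERT: possible brute force from 203.0.113.7
```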
"When we look at the rapidly changing IT security landscape, the proliferation of mass-marketed malware and the non-discriminatory approach of cybercriminals, we believe that real-time threat monitoring and management shouldn't be a luxury, but an affordable option for everyone," said SolarWinds CEO Kevin Thompson in today's announcement. "The acquisition of Trusted Metrics will allow us to offer a new product in the SolarWinds mold — powerful, easy to use, scalable — that is designed to give businesses the ability to more easily protect IT environments and business operations."
SolarWinds did not disclose the financial details of the transaction. Trusted Metrics was founded in 2010; although it received some seed funding, it never raised any additional funding rounds after that.
Read more: SolarWinds acquires real-time threat-monitoring service Trusted Metrics
Apple is creating a new AI/ML team that brings together its Core ML and Siri teams under one leader in John Giannandrea.
Apple confirmed this morning that the combined Artificial Intelligence and Machine Learning team, which houses Siri, will be led by the recent hire, who came to Apple this year after an eight-year stint at Google, where he led the Machine Intelligence, Research and Search teams. Before that he founded Metaweb Technologies and Tellme.
The internal structures of the Siri and Core ML teams will remain the same, but they will now answer to Giannandrea. Apple's internal structure means that the teams will likely remain integrated across the org as they're wedded to various projects, including developer tools, mapping, Core OS and more. ML is everywhere, basically.
In the early days, John was a senior engineer at General Magic, the legendary company founded by Apple team members in 1989, including Andy Hertzfeld, Marc Porat and Bill Atkinson. That company, though eventually a failure, generated an incredible amount of technology breakthroughs, including tiny touchscreens and software modems. General Magic also served as an insane incubator and employer of talented people; at one point Susan Kare, Tony Fadell, Andy Rubin, Megan Smith and current Apple VP of Technology Kevin Lynch all worked there.
Giannandrea spoke at TechCrunch Disrupt 2017, because our timing is impeccable. You can listen to that talk here:
The Siri and ML teams at Apple, though sharing many common goals, grew up separately. Given that "AI" in general is so central to Apple's efforts across a bunch of different initiatives, it makes sense to have one experienced person be the buck stopper. The haphazard way that Siri has lurched forward has got to get smoothed out if Apple is going to make a huge play for improvements in the same way that it's doing with Maps. I think at some point there was a realization that doing AI/ML heavy lifting with the additional load of maintaining user data privacy was enough to carry without having to also maintain several different stacks for its ML tools. Recent releases like Create ML are external representations of the work that Apple's ML teams are doing internally, but that work is still too fragmented. Creating a new org sends a clear message that everyone should be on the same page about what masters they serve.
As with Maps, Apple is going to continue to build out its two-sided AI/ML teams that focus on general computation in the cloud and personalized, data-sensitive computation locally on device. With more than 1 billion devices in people's hands capable of doing some of this crunching, Apple is in the process of building one of the biggest edge computing networks ever for AI. Seems like a challenge Giannandrea would be interested in.
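A minimal sketch of that two-sided split, with hypothetical names (this illustrates the pattern, not Apple's actual architecture):

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    task: str
    touches_personal_data: bool

def run_on_device(req: InferenceRequest) -> str:
    # Personalized, data-sensitive work stays local to the phone.
    return f"on-device model handled '{req.task}'; data never left the device"

def run_in_cloud(req: InferenceRequest) -> str:
    # Generic heavy lifting can use shared cloud capacity.
    return f"cloud model handled '{req.task}' at scale"

def route(req: InferenceRequest) -> str:
    """Privacy rule: anything touching personal data is computed locally."""
    return run_on_device(req) if req.touches_personal_data else run_in_cloud(req)

print(route(InferenceRequest("next-word prediction", touches_personal_data=True)))
print(route(InferenceRequest("generic image classification", touches_personal_data=False)))
```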
Read more: Apple combines machine learning and Siri teams under Giannandrea