Technology
Twitter has finally taken action against Infowars creator Alex Jones, but it isn't what you might think.
While Apple, Facebook, Google/YouTube, Spotify and many others have removed Jones and his conspiracy-peddling organization Infowars from their platforms, Twitter has remained unmoved with its claim that Jones hasn't violated rules on its platform.
That was helped in no small way by the mysterious removal of some tweets last week, but now Jones has been found to have violated Twitter's rules, as CNET first noted.
Twitter is punishing Jones for a tweet that violates its community standards, but it isn't locking him out forever. Instead, a spokesperson for the company confirmed that Jones' account is in "read-only mode" for up to seven days.
That means he will still be able to use the service and look up content via his account, but he'll be unable to engage with it. That means no tweets, likes, retweets, comments, etc. He's also been ordered to delete the offending tweet — more on that below — in order to qualify for a fully functioning account again.
That restoration doesn't happen immediately, though. Twitter's policy states that the read-only sin bin can last for up to seven days "depending on the nature of the violation." We're imagining Jones got the full one-week penalty, but we're waiting on Twitter to confirm that.
The offending tweet in question is a link to a story claiming President "Trump must take action against web censorship." It looks like the tweet has already been deleted, but not before Twitter judged that it violates its policy on abuse:
Abuse: You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else's voice.
When you consider the things Infowars and Jones have said or written — 9/11 conspiracies, harassment of Sandy Hook victim families and more — the content in question seems fairly innocuous. Indeed, you could look at President Trump's tweets and find seemingly more punishable content without much difficulty.
But here we are.
The weirdest part of this Twitter caning is one of the reference points that the company gave to media. These days, it is common for the company to point reporters to specific tweets that it believes encapsulate its position on an issue, or provide additional color in certain situations.
In this case, Twitter pointed us — and presumably other reporters — to this tweet from Infowars' Paul Joseph Watson:
WTF, Twitter…
Read more: Twitter puts Infowars’ Alex Jones in the ‘read-only’ sin bin for 7 days
U.S. accelerator Y Combinator is expanding to China after it announced the hiring of former Microsoft and Baidu executive Qi Lu, who will develop a standalone startup program that runs on Chinese soil.
Shanghai-born Lu spent 11 years with Yahoo and eight years with Microsoft before a short spell with Baidu, where he was COO and head of the firm's AI research division. Now he becomes founding CEO of YC China, while he's also stepping into the role of Head of YC Research. YC will also expand its research team with an office in Seattle, where Lu has plenty of links.
There's no immediate timeframe for when YC will launch its China program, which represents its first global expansion, but YC President Sam Altman told TechCrunch in an interview that the program will be based in Beijing once it is up and running. Altman said Lu will use his network and YC's growing presence in China — it ran its first 'Startup School' event in Beijing earlier this year — to recruit prospects who will be put into the upcoming winter program in the U.S.
Following that, YC will work to launch the China-based program as soon as possible. It appears that the details are still being sketched out, although Altman did confirm it will run independently but may lean on local partners for help. The YC President said he envisages batch programming in the U.S. and China overlapping to a point, with visitors, shared mentors and potentially other interaction between the two.
China's startup scene has grown massively in recent years, with numerous reports pegging it close to that of the U.S., so it makes sense that YC, as an 'ecosystem builder,' wants in. But Altman believes that the benefits extend beyond YC and will strengthen its network of founders, which spans more than 1,700 startups.
"The number one asset YC has is a very special founder community," he told TechCrunch. "The opportunity to include a lot more Chinese founders seems super valuable to everyone. Over the next decade, a significant portion of the tech companies started will be from the U.S. or China [so operating a] network across both is a huge deal."
Altman said he's also banking on Lu being the man to make YC China happen. He revealed that he's spent a decade trying to hire Lu, who he described as "one of the most impressive technologists I know."
Y Combinator President Sam Altman has often spoken of his desire to get into the Chinese market
Entering China as a foreign entity is never easy, and in the venture world it is particularly tricky because China already has an advanced ecosystem of firms with their own networks for founders, particularly in the early-stage space. But Altman is confident that YC's global reach and roster of founders and mentors will appeal to startups in China.
YC has been working to add Chinese startups to its U.S.-based programs for some time. Altman has long been keen on an expansion to China, as he discussed at our Disrupt event last year, and partner Eric Migicovsky — who co-founded Pebble — has been busy developing networks and arranging events like the Beijing one to raise its profile.
That's seen some progress, with more teams from China — and other parts of the world — taking part in YC batches, which have never been more diverse. But YC is still missing out on global talent.
According to its own data, fewer than 10 Chinese companies have passed through its corridors, but that list looks like it is missing some names, so the number may be higher. Clearly, though, admissions are skewed towards the U.S. — the question is whether Qi Lu and the creation of YC China can significantly alter that.
Read more: Y Combinator is launching a startup program in China
Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.
The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it's definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.
It doesn't sound like much, but it's pretty complex to execute well, which, despite a few glitches, SEER managed to do.
At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.
In imitative mode the positions of the viewer's eyebrows and eyelids, and the position of their head, are mirrored by SEER. It's not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
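Todo hasn't published the implementation, but the imitative mode described above can be roughly sketched as a smoothing loop that pulls the robot's pose a little closer to the tracked face on every tick. Everything below (the feature names, the smoothing factor, the control step) is a hypothetical illustration, not SEER's actual code.

```python
from dataclasses import dataclass

@dataclass
class FaceState:
    """Normalized facial features reported by a hypothetical face tracker."""
    brow_raise: float   # 0.0 (lowered) .. 1.0 (fully raised)
    eye_open: float     # 0.0 (closed)  .. 1.0 (wide open)
    head_yaw: float     # -1.0 (left)   .. 1.0 (right)
    head_pitch: float   # -1.0 (down)   .. 1.0 (up)

def smooth(prev: float, target: float, alpha: float = 0.2) -> float:
    """Low-pass filter to suppress jitter from noisy face data."""
    return prev + alpha * (target - prev)

def imitative_step(robot: FaceState, viewer: FaceState) -> FaceState:
    """Move the robot's pose a fraction of the way toward the viewer's pose."""
    return FaceState(
        brow_raise=smooth(robot.brow_raise, viewer.brow_raise),
        eye_open=smooth(robot.eye_open, viewer.eye_open),
        head_yaw=smooth(robot.head_yaw, viewer.head_yaw),
        head_pitch=smooth(robot.head_pitch, viewer.head_pitch),
    )

# Example: one control tick while the viewer raises their eyebrows.
robot = FaceState(0.5, 0.5, 0.0, 0.0)
viewer = FaceState(0.9, 0.7, 0.1, -0.1)
print(imitative_step(robot, viewer))
```

The smoothing step is one plausible way to tame the kind of jitter that makes the real robot "freak out" on noisy face data.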
Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It's a bit creepy, but not in the way that some robots are — when you're looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.
That's largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you'd never forget it was listening to everything you say. You might even tell it your problems.
This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that's a good thing or a bad one, I guess we'll find out together.
Read more: This robot maintains tender, unnerving eye contact
While Elon Musk and Mark Zuckerberg debate the dangers of artificial general intelligence, startups applying AI to more narrowly defined problems, such as accelerating the performance of sales teams and improving the operating efficiency of manufacturing lines, are building billion-dollar businesses. Narrowly defining a problem, however, is only the first step to finding valuable business applications of AI.
To find the right opportunity around which to build an AI business, startups must apply the "Goldilocks principle" in several different dimensions to find the sweet spot that is "just right" to begin — not too far in one dimension, not too far in another. Here are some ways for aspiring startup founders to thread the needle with their AI strategy, based on what we've learned from working with thousands of AI startups.
"Just right" prediction time horizons
Unlike pre-intelligence software, AI responds to the environment in which it operates; algorithms take in data and return an answer or prediction. Depending on the application, that prediction may describe an outcome in the near term, such as tomorrow's weather, or an outcome many years in the future, such as whether a patient will develop cancer in 20 years. The time horizon of the algorithm's prediction is critical to its usefulness and to whether it offers an opportunity to build defensibility.
Algorithms making predictions with long time horizons are difficult to evaluate and improve. For example, an algorithm may use the schedule of a contractor's previous projects to predict that a particular construction project will fall six months behind schedule and go over budget by 20 percent. Until this new project is completed, the algorithm designer and end user can only tell whether the prediction is directionally correct — that is, whether the project is falling behind or costs are higher.
Even when the final project numbers end up very close to the predicted numbers, it will be difficult to complete the feedback loop and positively reinforce the algorithm. Many factors may influence complex systems like a construction project, making it difficult to A/B test the prediction to tease out the input variables from unknown confounding factors. The more complex the system, the longer it may take the algorithm to complete a reinforcement cycle, and the more difficult it becomes to precisely train the algorithm.
While many enterprise customers are open to piloting AI solutions, startups must be able to validate the algorithm's performance in order to complete the sale. The most convincing way to validate an algorithm is by using the customer's real-time data, but this approach may be difficult to achieve during a pilot. If the startup does get access to the customer's data, the prediction time horizon should be short enough that the algorithm can be validated during the pilot period.
Historic data, if it's available, can serve as a stopgap to train an algorithm and temporarily validate it via backtesting. Training an algorithm making long time horizon predictions on historic data is risky because processes and environments are more likely to have changed the further back you dig into historic records, making historic data sets less descriptive of present-day conditions.
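To make the backtesting idea concrete, here is a minimal sketch in Python with made-up numbers: the records are split chronologically, a trivial model is fit on the older slice, and error is measured on the newest slice. If the environment has drifted since the training window, this error will understate real-world error, which is exactly the risk described above.

```python
# Hypothetical backtest: train on older records, validate on the newest slice.
records = [
    # (year, feature, observed_outcome) -- made-up historic project data
    (2012, 0.4, 0.50), (2013, 0.6, 0.63), (2014, 0.5, 0.55),
    (2015, 0.7, 0.71), (2016, 0.8, 0.90), (2017, 0.9, 1.05),
]

records.sort(key=lambda r: r[0])
train, test = records[:4], records[4:]      # chronological split, no shuffling

# Fit a trivial one-parameter model: outcome ~ k * feature.
k = sum(y for _, _, y in train) / sum(x for _, x, _ in train)

# Error on the most recent slice; drift since the training window is not visible here.
mae = sum(abs(y - k * x) for _, x, y in test) / len(test)
print(f"backtest MAE on most recent slice: {mae:.3f}")
```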
In other cases, while the historic data describing outcomes exists for you to train an algorithm, it may not capture the input variable under consideration. In the construction example, that could mean you find out that sites using blue safety hats are more likely to complete projects on time, but since hat color wasn't previously considered useful in managing projects, that information wasn't recorded in the archival records. This data must be captured from scratch, which further delays your time to market.
Instead of making singular "hero" predictions with long time horizons, AI startups should build multiple algorithms making smaller, simpler predictions with short time horizons. Decomposing an environment into simpler subsystems or processes limits the number of inputs, making them easier to control for confounding factors. The BIM 360 Project IQ Team at Autodesk takes this small prediction approach to areas that contribute to construction project delays. Their models predict safety and score vendor and subcontractor quality/reliability, all of which can be measured while a project is ongoing.
Shorter time horizons make it easier for the algorithm engineer to monitor its change in performance and take action to quickly improve it, instead of being limited to backtesting on historic data. The shorter the time horizon, the shorter the algorithm's feedback loop will be. As each cycle through the feedback loop incrementally compounds the algorithm's performance, shorter feedback loops are better for building defensibility.
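A toy illustration of that compounding effect, assuming a synthetic stream of short-horizon outcomes and a deliberately simple one-parameter model:

```python
# Minimal sketch of a short feedback loop: each cycle, predict, observe the
# real short-horizon outcome, and nudge the model. With a shorter horizon the
# loop closes more often, so small corrections compound faster.
def feedback_loop(stream, lr=0.1):
    weight = 0.0                       # one-parameter model: prediction = weight * x
    for x, observed in stream:
        predicted = weight * x
        error = observed - predicted   # available quickly because the horizon is short
        weight += lr * error * x       # immediate correction from the fresh outcome
        yield predicted, observed, weight

synthetic = [(1.0, 2.1), (0.8, 1.7), (1.2, 2.3), (0.9, 1.9), (1.1, 2.2)]
for predicted, observed, weight in feedback_loop(synthetic):
    print(f"predicted={predicted:.2f} observed={observed:.2f} weight={weight:.2f}")
```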
"Just right" actionability window
Most algorithms model dynamic systems and return a prediction for a human to act on. Depending on how quickly the system is changing, the algorithm's output may not remain valid for very long: the prediction may "decay" before the user can take action. In order to be useful to the end user, the algorithm must be designed to accommodate the limitations of computing and human speed.
In a typical AI-human workflow, the human feeds input data into the algorithm, the algorithm runs calculations on that input data and returns an output that predicts a certain outcome or recommends a course of action; the human interprets that information to decide on a course of action, then takes action. The time it takes the algorithm to compute an answer and the time it takes for a human to act on the output are the two largest bottlenecks in this workflow.
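That workflow can be sketched as a small timed pipeline in which both bottlenecks are explicit; the timings, staleness threshold and lambdas below are placeholders, not a real system.

```python
import time

# Sketch of the workflow above, with the two bottlenecks made explicit:
# model compute time and human reaction time.
def run_workflow(input_data, compute, human_decide, max_staleness_s=60.0):
    t0 = time.time()
    prediction = compute(input_data)          # bottleneck 1: algorithm compute time
    action = human_decide(prediction)         # bottleneck 2: human interpretation
    elapsed = time.time() - t0
    if elapsed > max_staleness_s:
        # The environment may have moved on; the prediction has "decayed".
        return None, elapsed
    return action, elapsed

action, elapsed = run_workflow(
    {"sensor": 0.7},
    compute=lambda d: "raise alert" if d["sensor"] > 0.5 else "ok",
    human_decide=lambda p: f"operator acts on: {p}",
)
print(action, f"({elapsed:.4f}s)")
```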
For most of AI history, slow computational speeds have severely limited the scope of applied AI. An algorithm's prediction depends on the input data, and the input data represents a snapshot in time at the moment it was recorded. If the environment described by the data changes faster than the algorithm can compute the input data, by the time the algorithm completes its computations and returns a prediction, the prediction will only describe a moment in the past and will not be actionable. For example, the algorithm behind the music app Shazam may have needed several hours to identify a song after first "hearing" it using the computational power of a Windows 95 computer.
The rise of cloud computing and the development of hardware specially optimized for AI computations have dramatically broadened the scope of areas where applied AI is actionable and affordable. While macro tech advancements can greatly advance applied AI, the algorithm is not totally held hostage to current limits of computation; reinforcement through training also can improve the algorithm's response time. The more examples of the same kind an algorithm encounters, the more quickly it can skip computations to arrive at a prediction. Thanks to advances in computation and reinforcement, today Shazam takes less than 15 seconds to identify a song.
Automating the decision and action also could help users make use of predictions that decay too quickly to wait for humans to respond. Opsani is one such company using AI to make decisions that are too numerous and fast-moving for humans to make effectively. Unlike human DevOps engineers, who can only move so fast to optimize performance based on recommendations from an algorithm, Opsani applies AI to both identify and automatically improve operations of applications and cloud infrastructure so its customers can enjoy dramatically better performance.
Not all applications of AI can be completely automated, however, if the perceived risk is too high for end users to accept, or if regulations mandate that humans must approve the decision.
"Just right" performance minimums
Just like software startups launch when they have built a minimum viable product (MVP) in order to collect actionable feedback from initial customers, AI startups should launch when they reach the minimum algorithmic performance (MAP) required by early adopters, so that the algorithm can be trained on more diverse and fresh data sets and avoid becoming overfit to a training set.
Most applications don't require 100 percent accuracy to be valuable. For example, a fraud detection algorithm may only immediately catch five percent of fraud cases within 24 hours of when they occur, but human fraud investigators catch 15 percent of fraud cases after a month of analysis. In this case, the MAP is zero, because the fraud detection algorithm could serve as a first filter in order to reduce the number of cases the human investigators must process. The startup can go to market immediately in order to secure access to the large volume of fraud data used for training its algorithm. Over time, the algorithm's accuracy will improve and reduce the burden on human investigators, freeing them to focus on the most complex cases.
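A minimal sketch of that first-filter setup, with invented scores and thresholds, shows why even a modest model is immediately useful: it auto-flags the highest-confidence cases and ranks the rest for the human queue.

```python
# Sketch of the "first filter" idea: even a modest model trims the queue that
# human investigators must work through. Scores and thresholds are made up.
cases = [
    {"id": 1, "score": 0.95},   # model is confident this is fraud
    {"id": 2, "score": 0.40},
    {"id": 3, "score": 0.80},
    {"id": 4, "score": 0.10},
    {"id": 5, "score": 0.65},
]

AUTO_FLAG = 0.90      # caught immediately by the algorithm
NEEDS_REVIEW = 0.50   # routed to human investigators, highest risk first

auto_flagged = [c for c in cases if c["score"] >= AUTO_FLAG]
review_queue = sorted(
    (c for c in cases if NEEDS_REVIEW <= c["score"] < AUTO_FLAG),
    key=lambda c: c["score"], reverse=True,
)
print("auto-flagged:", [c["id"] for c in auto_flagged])
print("review queue:", [c["id"] for c in review_queue])
```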
Startups building algorithms for zero or low MAP applications will be able to launch quickly, but may find themselves continuously looking over their shoulders for copycats, especially if those copycats appear before the algorithm has reached a high level of performance.
Startups attacking low MAP problems also should watch out for problems that can be solved with near 100 percent accuracy with a very small training set, where the problem being modeled is relatively simple, with few dimensions to track and few possible variations in outcome.
AI-powered contract processing is a good example of an application where the algorithm's performance plateaus quickly. There are thousands of contract types, but most of them share key fields: the parties involved, the items of value being exchanged, time frame, etc. Specific document types like mortgage applications or rental agreements are highly standardized in order to comply with regulation. Across multiple startups, we have seen algorithms that automatically process these documents needing only a few hundred examples to train to an acceptable degree of accuracy before additional examples do little to improve the algorithm, making it easy for new entrants to match incumbents and earlier entrants in performance.
AIs built for applications where human labor is inexpensive and able to easily achieve high accuracy may need to reach a higher MAP before they can find an early adopter. Tasks requiring fine motor skills, for example, have yet to be taken over by robots because human performance sets a very high MAP to overcome. When picking up an object, the AIs powering the robotic hand must gauge an object's stiffness and weight with a high degree of accuracy, otherwise the hand will damage the object being handled. Humans can very accurately gauge these dimensions with almost no training. Startups attacking high MAP problems must invest more time and capital into acquiring enough data to reach MAP and launch.
Threading the needle
Narrow AI can demonstrate impressive gains in a wide range of applications — in the research lab. Building a business around a narrow AI application, on the other hand, requires a new playbook. This process is heavily dependent on the specific use case on all dimensions, and the performance of the algorithm is merely one starting point. There's no one-size-fits-all approach to moving an algorithm from the research lab to the market, but we hope these ideas will provide a useful blueprint for you to begin.
Read more: Finding the Goldilocks zone for applied AI
CEO John Lemp recently said that thanks to a new policy, publishers in Revcontent's content recommendation network "won't ever make a cent" on false and misleading stories — at least, not from the network.
To achieve this, the company is relying on fact-checking provided by the Poynter Institute's International Fact Checking Network. If any two independent fact checkers from the International Fact Checking Network flag a story from the Revcontent network as false, the company's widget will be removed, and Revcontent will not pay out any money on that story (not even revenue earned before the story was flagged).
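In code terms, the policy Lemp describes boils down to a very small rule; the sketch below (Python, with an invented story record) is our reading of it, not Revcontent's implementation.

```python
# The policy described above reduces to a simple rule: two independent
# fact-checker flags remove the widget and zero out the story's earnings,
# including revenue earned before the flags. Data below is illustrative.
def apply_policy(story):
    independent_flags = len(set(story["flagged_by"]))
    if independent_flags >= 2:
        story["widget_removed"] = True
        story["payout"] = 0.0          # even pre-flag earnings are withheld
    return story

story = {"title": "example story", "flagged_by": ["checker_a", "checker_b"],
         "widget_removed": False, "payout": 137.50}
print(apply_policy(story))
```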
In some ways, Revcontent's approach to fighting fake news and misinformation sounds similar to the big social media companies' — Lemp, like Twitter, has said his company cannot be the "arbiter of truth," and like Facebook, he's emphasizing the need to remove the financial incentives for posting sensationalistic-but-misleading stories.
However, Lemp (who's spoken in the past about using content recommendations to reduce publishers' reliance on individual platforms) criticized the big internet companies for "arbitrarily" taking down content in response to "bad PR." In contrast, he said Revcontent will have a fully transparent approach, one that removes the financial rewards for fake news without silencing anyone.
Lemp didn't mention any specific takedowns, but the big story these days is Infowars. It seems like nearly everyone has been cracking down on Alex Jones' far-right, conspiracy-mongering site, removing at least some Infowars-related accounts and content in the past couple of weeks.
The Infowars story also raises the question of whether you can effectively fight fake news on a story-by-story basis, rather than completely cutting off publishers when they've shown themselves to consistently post misleading or falsified stories.
When asked about this, Lemp said Revcontent also has the option of completely removing publishers from the network, but he views that as a "last resort."
Read more: Revcontent is trying to get rid of misinformation with help from the Poynter Institute
The BitFi crypto wallet was supposed to be unhackable, and none other than famous weirdo John McAfee claimed that the device — essentially an Android-based mini tablet — would withstand any attack. Spoiler alert: it couldn't.
First, a bit of background. The $120 device launched at the beginning of this month to much fanfare. It consisted of a device that McAfee claimed contained no software or storage and was instead a standalone wallet similar to the Trezor. The website featured a bold claim by McAfee himself, one that would give a normal security researcher pause:
Further, the company offered a bug bounty that seems to be slowly being eroded by outside forces. They asked hackers to pull coins off of a specially prepared $10 wallet, a move that is uncommon in the world of bug bounties. They wrote:
We deposit coins into a Bitfi wallet. If you wish to participate in the bounty program, you will purchase a Bitfi wallet that is preloaded with coins for just an additional $10 (the reason for the charge is because we need to ensure serious inquiries only). If you successfully extract the coins and empty the wallet, this would be considered a successful hack. You can then keep the coins and Bitfi will make a payment to you of $250,000. Please note that we grant anyone who participates in this bounty permission to use all possible attack vectors, including our servers, nodes, and our infrastructure.
Hackers began attacking the device immediately, eventually hacking it to find the passphrase used to move crypto in and out of the wallet. In a detailed set of tweets, security researchers Andrew Tierney and Alan Woodward began finding holes by attacking the operating system itself. However, this did not match the bounty to the letter, claimed BitFi, even though they did not actually ship any bounty-ready devices.
Then, to add insult to injury, the company earned a Pwnies award at security conference Defcon. The award was given for worst vendor response. As hackers began dismantling the device, BitFi went on the defensive, consistently claiming that their device was secure. And the hackers had a field day. One hacker, 15-year-old Saleem Rashid, was able to play Doom on the device.
The hacks kept coming. McAfee, for his part, kept refusing to accept the hacks as genuine.
Unfortunately, the latest hack may have just fulfilled all of BitFi's requirements. Rashid and Tierney have been able to pull cash out of the wallet by hacking the passphrase, a primary requirement for the bounty. "We have sent the seed and phrase from the device to another server, it just gets sent using netcat, nothing fancy," Tierney said. "We believe all conditions have been met."
At the end of this crypto mess, BitFi did what most hacked crypto companies do: double down on the threats. In a recently deleted tweet, the company made it clear that it was not to be messed with:
The researchers, however, may still have the last laugh.
Read more: ‘Unhackable’ BitFi crypto wallet has been hacked