Instagram appears to be working on a new video tool, and it’s a clone of the feature that made TikTok so successful (FB)


Instagram appears to be working on a new feature for Stories that allows users to create videos in a way that’s quite similar to how videos are created on the popular short-form video app TikTok.

The new feature, called "Clips," was discovered by reverse engineer Jane Manchun Wong, who often finds unreleased features and additions on social platforms. According to Wong, "Clips" allows users to post videos on their Stories that are recorded in snippets and then spliced together. The feature also seems to allow users to overlay music over their clips, and adjust the length and speed of those video segments.

If those features sound familiar to you, you’re not the only one. These video-editing abilities are similar to the ones readily available on short-form video app TikTok, a platform with more than 1 billion downloads. TikTok users create content, ranging from comedic sketches to one-man-show song covers, by using the app’s ability to record multiple clips and piece them together into a 60-second masterpiece.

Furthermore, TikTok’s roots lie in the now-defunct app Musical.ly, whose content was largely users lip-syncing to popular songs. TikTok still offers users a full library of songs and "soundtracks" to put in their videos, spurring on viral music-based trends like the one that made "Old Town Road" so popular.

Instagram declined to comment on Wong’s findings of the "Clips" feature. Wong told Business Insider she discovered "Clips" on Instagram in early July.


This isn’t the first time that features from one platform have been mimicked by competing social media apps, which recreate and integrate similar features into their own products. Instagram and its parent company, Facebook, have done this several times.

In its most successful case, Instagram duplicated Snapchat’s Stories format, and quickly surpassed Snapchat in Stories users. As of January, Instagram Stories had hit 500 million daily users.

However, it remains to be seen whether Instagram — when and if it releases the "Clips" feature — will be able to draw users away from TikTok to the photo-sharing platform.



via Business Insider https://www.businessinsider.com/instagram-clips-tiktok-features-video-app-jane-manchun-wong-2019-9?utm_source=feedly&utm_medium=referral

Consumers’ Trust in Brands Has Fallen to a New Low. Surprised? Probably Not

“It will take some time to work through all of the changes we need to make, but I’m committed to getting it right.”

Doubtless that was the best thing to say—but it was too little, too late. Prior to the scandal, 79% of Facebook users said they were confident that the social media platform was “committed” to protecting the privacy of their personal information. But just one week after the Cambridge story broke, that number plummeted to 27%, a 66-point nosedive.

Not that lack of trust in tech giants like Facebook was a new thing. In 2016, this magazine reported the results of a ranking by brand consultancy Prophet. Asked to list brands they trusted in the order they trusted them, consumers stuck Facebook down at No. 200. Hardly faring much better, fellow data colossus Google sat at 130.

With consumer faith at all-time lows like this, you’d think that the tech sector’s best and brightest might have found ways to rebuild some of consumers’ lost trust—but, if anything, the news seems to have only gotten worse.

The latest evidence: Trends in Consumer Trust, a just-released study by customer-relationship management giant Salesforce. It revealed, among other things, that 59% of consumers now fear that their personal data is vulnerable to hackers, and well over half—54%—think that companies don’t operate with their customers’ best interests in mind.

“Whether you’re looking across the geographical or political world, [or] the economic and social issues across industries, there have been breaches of trust,” said Stephanie Buscemi, CMO of Salesforce, who spoke with Adweek prior to her keynote presentation on trust issues delivered earlier today at the Digital Marketing Expo & Conference (DMEXCO) in Cologne, Germany. “Customers are in the midst of a trust crisis. In particular, the tech industry is in the center of that,” Buscemi said.

And she should know. Cloud-based software maker Salesforce is the largest customer-relationship software firm in the business, so concerns about privacy, data security and the ethics surrounding data use are inescapable issues for its sector.

Trust, Buscemi believes, has assumed an equal footing with traditional brand attributes like quality and value to become an essential piece of what any company needs to offer consumers.

“It’s no longer enough to have a great product or service,” she said. “You have to build a deep relationship with your customers.”

Which sounds great in theory, of course, but what does that mean operationally? After all, trust is a state of mind, a nebulous concept and not the sort of thing a brand can simply create the way it can, for instance, introduce a new service or lower the price of a product.

In line with the recommendations of Salesforce’s report, Buscemi stressed the need for brands (all brands, not just the tech giants) to be more transparent with customers and to ask for consent on data usage before every transaction.

“There’s a lot a brand can do,” she told Adweek. “Every interaction needs to have a level of consent factored in, and every one of those interactions is an opportunity to build trust.”

DMEXCO is the digital industry’s largest trade show, so it only makes sense for the largest CRM firm to be talking about data privacy and trust issues there. But the timing of Salesforce’s report is significant for another reason. The California Consumer Privacy Act (CCPA)—widely regarded as the American analog to the E.U.’s General Data Protection Regulation—is set to take effect in January. Its provisions include requiring companies to disclose the personal data they collect, how that data is used and to whom it’s sold, and to delete that data upon request. Given that the CCPA governs not only companies based in California (which Salesforce is) but any companies doing business in that state, brand consultant David J. Deal says it’s not especially surprising that Salesforce would want to talk about trust issues now.

via AdWeek : All News https://www.adweek.com/brand-marketing/consumers-trust-in-brands-has-fallen-to-a-new-low-surprised-probably-not/

How to create an Instagram 3×3 grid post on iOS

Social media is not easy to manage, and when the medium is mostly visual, as it is on Instagram, visibility is key. Unfortunately, visibility is also hard to achieve, since users follow many accounts, all trying to stand out. This has led to the use of ‘grid’ posts on Instagram.

An Instagram grid post is a technique users devised to post larger photos. An image is broken down into a 3×3 grid of nine parts, which are then posted in a specific order. When viewed on an account’s profile page, the pieces reassemble into the whole image. Here’s how to create an Instagram 3×3 grid post on iOS.
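The splitting step these apps perform is simple arithmetic. Here is a minimal sketch in Python that computes the nine crop boxes and the posting order (pure arithmetic for illustration; an imaging library such as Pillow would use boxes like these to do the actual cropping). The function name and dimensions are hypothetical:

```python
def grid_boxes(width, height, rows=3, cols=3):
    """Return crop boxes (left, top, right, bottom) for each tile of the
    grid, listed in the order the pieces should be POSTED. An Instagram
    profile shows the newest upload at the top-left, so the bottom-right
    tile must be uploaded first and the top-left tile last."""
    tile_w, tile_h = width // cols, height // rows
    boxes = [
        (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
        for r in range(rows)      # top-to-bottom
        for c in range(cols)      # left-to-right
    ]
    return boxes[::-1]  # reverse: last visual tile is uploaded first

boxes = grid_boxes(900, 900)
print(len(boxes))   # 9 tiles
print(boxes[0])     # bottom-right tile of a 900x900 image: (600, 600, 900, 900)
```

This also explains why the apps number the pieces for you: uploading in the wrong order scrambles the mosaic on your profile.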

Instagram 3×3 grid post

In order to create an Instagram 3×3 grid post, you need to use an app that will break the photo down into 9 parts and help you post them in the correct order. There are both free and paid apps that can do the job. The free apps usually add a watermark, and most paid apps require a subscription, but Photo Grid – Create Grids Pics is an exception.

You can use Photo Grid – Create Grids Pics to create a 3×3 grid post on Instagram and it won’t add a watermark to it. It does show you ads but you can pay to remove them if they bother you too much. Install the app and tap the Grid option on its home screen to get started. Select a photo to break down into a grid, and then tap Next.

You can edit the photo before it’s broken down, and you can also select what sort of grid it is divided into. The app supports a 3×3, 2×3, and 1×3 grid, among others. Once you’ve made all the edits, tap the next button at the top until you reach the screen showing the order in which to post the photos. Tap the photo numbered ‘1’ and share it on Instagram. Continue in order; when you’ve uploaded the 9th photo, your grid post will be complete.

Visit your own Instagram profile and you will be able to see the 3×3 post.

It goes without saying that this arrangement will shift when you upload another photo. To keep it from being disturbed, upload three photos at a time, which pushes everything down by one full row while keeping the arrangement intact.

These posts look good and encourage users to visit your profile since they’re unable to get the full/big picture from just their own feeds.

Read How to create an Instagram 3×3 grid post on iOS by Fatima Wahab on AddictiveTips – Tech tips to make you smarter

via AddictiveTips https://www.addictivetips.com/ios/create-an-instagram-3×3-grid-post-on-ios/

A.I. Is Learning From Humans. Many Humans.

At iMerit offices in Kolkata, India, employees label images that are used to teach artificial intelligence systems. Credit: Rebecca Conway for The New York Times

BHUBANESWAR, India — Namita Pradhan sat at a desk in downtown Bhubaneswar, India, about 40 miles from the Bay of Bengal, staring at a video recorded in a hospital on the other side of the world.

The video showed the inside of someone’s colon. Ms. Pradhan was looking for polyps, small growths in the large intestine that could lead to cancer. When she found one — they look a bit like a slimy, angry pimple — she marked it with her computer mouse and keyboard, drawing a digital circle around the tiny bulge.

She was not trained as a doctor, but she was helping to teach an artificial intelligence system that could eventually do the work of a doctor.

Ms. Pradhan was one of dozens of young Indian women and men lined up at desks on the fourth floor of a small office building. They were trained to annotate all kinds of digital images, pinpointing everything from stop signs and pedestrians in street scenes to factories and oil tankers in satellite photos.

A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans.

Before an A.I. system can learn, someone has to label the data supplied to it. Humans, for example, must pinpoint the polyps. The work is vital to the creation of artificial intelligence like self-driving cars, surveillance systems and automated health care.

Tech companies keep quiet about this work. And they face growing concerns from privacy activists over the large amounts of personal data they are storing and sharing with outside businesses.

Earlier this year, I negotiated a look behind the curtain that Silicon Valley’s wizards rarely grant. I made a meandering trip across India and stopped at a facility across the street from the Superdome in downtown New Orleans. In all, I visited five offices where people are doing the endlessly repetitive work needed to teach A.I. systems, all run by a company called iMerit.

There were intestine surveyors like Ms. Pradhan and specialists in telling a good cough from a bad cough. There were language specialists and street scene identifiers. What is a pedestrian? Is that a double yellow line or a dotted white line? One day, a robotic car will need to know the difference.

iMerit employees must learn unusual skills for their labeling, like spotting a problematic polyp on a human intestine. Credit: Rebecca Conway for The New York Times

What I saw didn’t look very much like the future — or at least the automated one you might imagine. The offices could have been call centers or payment processing centers. One was a timeworn former apartment building in the middle of a low-income residential neighborhood in western Kolkata that teemed with pedestrians, auto rickshaws and street vendors.

In facilities like the one I visited in Bhubaneswar and in other cities in India, China, Nepal, the Philippines, East Africa and the United States, tens of thousands of office workers are punching a clock while they teach the machines.

Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries. The workers earn a few pennies for each label.

Based in India, iMerit labels data for many of the biggest names in the technology and automobile industries. It declined to name these clients publicly, citing confidentiality agreements. But it recently revealed that its more than 2,000 workers in nine offices around the world are contributing to an online data-labeling service from Amazon called SageMaker Ground Truth. Previously, it listed Microsoft as a client.

Artwork and motivational affirmations on a display at the iMerit offices in the Metiabruz neighborhood of Kolkata, India. Credit: Rebecca Conway for The New York Times

One day, who knows when, artificial intelligence could hollow out the job market. But for now, it is generating relatively low-paying jobs. The market for data labeling passed $500 million in 2018 and it will reach $1.2 billion by 2023, according to the research firm Cognilytica. This kind of work, the study showed, accounted for 80 percent of the time spent building A.I. technology.

Is the work exploitative? It depends on where you live and what you’re working on. In India, it is a ticket to the middle class. In New Orleans, it’s a decent enough job. For someone working as an independent contractor, it is often a dead end.

There are skills that must be learned — like spotting signs of a disease in a video or medical scan or keeping a steady hand when drawing a digital lasso around the image of a car or a tree. In some cases, when the task involves medical videos, pornography or violent images, the work turns grisly.

“When you first see these things, it is deeply disturbing. You don’t want to go back to the work. You might not go back to the work,” said Kristy Milland, who spent years doing data-labeling work on Amazon Mechanical Turk and has become a labor activist on behalf of workers on the service.

“But for those of us who cannot afford to not go back to the work, you just do it,” Ms. Milland said.

Before traveling to India, I tried labeling images on a crowdsourcing service, drawing digital boxes around Nike logos and identifying “not safe for work” images. I was painfully inept.

Before starting this work, I had to pass a test. Even that was disheartening. The first three times, I failed. Labeling images so people could instantly search a website for retail goods — not to mention the time spent identifying crude images of naked women and sex toys as “NSFW” — wasn’t exactly inspiring.

A.I. researchers hope they can build systems that can learn from smaller amounts of data. But for the foreseeable future, human labor is essential.

“This is an expanding world, hidden beneath the technology,” said Mary Gray, an anthropologist at Microsoft and the co-author of the book “Ghost Work,” which explores the data-labeling market. “It is hard to take humans out of the loop.”

Employees leaving iMerit offices in Bhubaneswar, India. The company, which is private, was started by Radha and Dipak Basu, who both had long careers in Silicon Valley. Credit: Rebecca Conway for The New York Times

Bhubaneswar is called the City of Temples. Ancient Hindu shrines rise over roadside markets at the southwestern end of the city — giant towers of stacked stone that date to the first millennium. In the city center, many streets are unpaved. Cows and feral dogs meander among the mopeds, cars and trucks.

The city — population: 830,000 — is also a rapidly growing hub for online labor. About a 15-minute drive from the temples, on a (paved) road near the city center, a white, four-story building sits behind a stone wall. Inside, there are three rooms filled with long rows of desks, each with its own wide-screen computer display. This was where Namita Pradhan spent her days labeling videos when I met her.

Ms. Pradhan, 24, grew up just outside the city and earned a degree from a local college, where she studied biology and other subjects before taking the job with iMerit. It was recommended by her brother, who was already working for the company. She lived at a hostel near her office during the week and took the bus back to her family home each weekend.

I visited the office on a temperate January day. Some of the women sitting at the long rows of desks were traditionally dressed — bright red saris, long gold earrings. Ms. Pradhan wore a green long-sleeve shirt, black pants, and white lace-up shoes as she annotated videos for a client in the United States.

Over the course of what was a typical eight-hour day, the shy 24-year-old watched about a dozen colonoscopy videos, constantly reversing the video for a closer look at individual frames.

Every so often, she would find what she was looking for. She would lasso it with a digital “bounding box.” She drew hundreds of these bounding boxes, labeling the polyps and other signs of illness, like blood clots and inflammation.
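The boxes she draws become structured training records for the client’s A.I. system. A hypothetical sketch of what one such record might look like follows; the field names and values are purely illustrative, since the article does not describe iMerit’s actual labeling schema:

```python
# Hypothetical example of the kind of structured record a bounding-box
# annotation produces. Field names and values are illustrative only.
annotation = {
    "frame_id": 1042,            # which video frame was labeled
    "label": "polyp",            # class the annotator assigned
    "box": {                     # pixel coordinates of the drawn box
        "x": 312, "y": 188,      # top-left corner
        "width": 46, "height": 41,
    },
    "annotator": "worker_07",    # who drew the box
}

def box_area(a):
    """Area of the labeled region in pixels, a common sanity check
    when validating annotations before training."""
    return a["box"]["width"] * a["box"]["height"]

print(box_area(annotation))  # → 1886
```

A model trained on thousands of such records learns to predict the box and the label from the raw frame alone.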

Namita Pradhan, second from right, works alongside colleagues at the iMerit offices in Bhubaneswar. Credit: Rebecca Conway for The New York Times

Her client, a company in the United States that iMerit is not allowed to name, will eventually feed her work into an A.I. system so it can learn to identify medical conditions on its own. The colon owner is not necessarily aware the video exists. Ms. Pradhan doesn’t know where the images came from. Neither does iMerit.

Ms. Pradhan learned the task during seven days of online video calls with a nonpracticing doctor, based in Oakland, Calif., who helps train workers at many iMerit offices. But some question whether experienced doctors and medical students should do this labeling themselves.

This work requires people “who have a medical background, and the relevant knowledge in anatomy and pathology,” said Dr. George Shih, a radiologist at Weill Cornell Medicine and NewYork-Presbyterian and the co-founder of the start-up MD.ai, which helps organizations build artificial intelligence for health care.

When we chatted about her work, Ms. Pradhan called it “quite interesting,” but tiring. As for the graphic nature of the videos? “It was disgusting at first, but then you get used to it.”

The images she labeled were grisly, but not as grisly as others handled at iMerit. The company’s clients are also building artificial intelligence that can identify and remove unwanted images on social networks and other online services. That means labeling pornography, graphic violence and other noxious images.

This work can be so upsetting that iMerit tries to limit how much of it workers see. Pornography and violence are mixed with more innocuous images, and those labeling the grisly images are sequestered in separate rooms to shield other workers, said Liz O’Sullivan, who oversaw data annotation at an A.I. start-up called Clarifai and has worked closely with iMerit on such projects.

Other labeling companies will have workers annotate unlimited numbers of these images, Ms. O’Sullivan said.

“I would not be surprised if this causes post-traumatic stress disorder — or worse. It is hard to find a company that is not ethically deplorable that will take this on,” she said. “You have to pad the porn and violence with other work, so the workers don’t have to look at porn, porn, porn, beheading, beheading, beheading.”

iMerit said in a statement it does not compel workers to look at pornography or other offensive material and only takes on the work when it can help improve monitoring systems.

Ms. Pradhan and her fellow labelers earn between $150 and $200 a month, which pulls in between $800 and $1,000 of revenue for iMerit, according to one company executive.

By United States standards, Ms. Pradhan’s salary is indecently low. But for her and many others in these offices, it is about an average salary for a data-entry job.

iMerit employees Prasenjit Baidya and his wife, Barnali Paik, at Mr. Baidya’s family home in the state of West Bengal. He said he was happy with the opportunities the work had given him. Credit: Rebecca Conway for The New York Times

Prasenjit Baidya grew up on a farm about 30 miles from Kolkata, the largest city in West Bengal, on the east coast of India. His parents and extended family still live in his childhood home, a cluster of brick buildings built at the turn of the 19th century. They grow rice and sunflowers in the surrounding fields and dry the seeds on rugs spread across the rooftops.

He was the first in his family to get a college education, which included a computer class. But the class didn’t teach him all that much. The room offered only one computer for every 25 students. He learned his computer skills after college, when he enrolled in a training course run by a nonprofit called Anudip. It was recommended by a friend, and it cost the equivalent of $5 a month.

Anudip runs English and computer courses across India, training about 22,000 people a year. It feeds students directly into iMerit, which its founders set up as a sister operation in 2013. Through Anudip, Mr. Baidya landed a job at an iMerit office in Kolkata, and so did his wife, Barnali Paik, who grew up in a nearby village.

Over the last six years, iMerit has hired more than 1,600 students from Anudip. It now employs about 2,500 people in total. More than 80 percent come from families with incomes below $150 a month.

Founded in 2012 and still a private company, iMerit has its employees perform digital tasks like transcribing audio files or identifying objects in photos. Businesses across the globe pay the company to use its workers, and increasingly, they assist work on artificial intelligence.

“We want to bring people from low-income backgrounds into technology — and technology jobs,” said Radha Basu, who founded Anudip and iMerit with her husband, Dipak, after long careers in Silicon Valley with the tech giants Cisco Systems and HP.

The average age of these workers is 24. Like Mr. Baidya, most of them come from rural villages. The company recently opened a new office in Metiabruz, a largely Muslim neighborhood in western Kolkata. There, it hires mostly Muslim women whose families are reluctant to let them outside the bustling area. They are not asked to look at pornographic images or violent material.

Employees in a training session at the iMerit offices in Metiabruz in Kolkata. Credit: Rebecca Conway for The New York Times

At first, iMerit focused on simple tasks — sorting product listings for online retail sites, vetting posts on social media. But it has shifted into work that feeds artificial intelligence.

The growth of iMerit and similar companies represents a shift away from crowdsourcing services like Mechanical Turk. iMerit and its clients have greater control over how workers are trained and how the work is done.

Mr. Baidya, now a manager at iMerit, oversees an effort to label street scenes used in training driverless cars for a major company in the United States. His team analyzes and labels digital photos as well as three-dimensional images captured by Lidar, devices that measure distances using pulses of light. They spend their days drawing bounding boxes around cars, pedestrians, stop signs and power lines.

He said the work could be tedious, but it had given him a life he might not have otherwise had. He and his wife recently bought an apartment in Kolkata, within walking distance of the iMerit office where she works.

“The changes in my life — in terms of my financial situation, my experiences, my skills in English — have been a dream,” he said. “I got a chance.”

Oscar Cabezas at the New Orleans office of iMerit. He joined the company when it started work on a Spanish-language digital assistant. Credit: Bryan Tarnowski for The New York Times

A few weeks after my trip to India, I took an Uber through downtown New Orleans. About 18 months ago, iMerit moved into one of the buildings across the street from the Superdome.

A major American tech company needed a way of labeling data for a Spanish-language version of its home digital assistant. So it sent the data to the new iMerit office in New Orleans.

After Hurricane Katrina in 2005, hundreds of construction workers and their families moved into New Orleans to help rebuild the city. Many stayed. A number of Spanish speakers came with that new work force, and the company began hiring them.

Oscar Cabezas, 23, moved with his mother to New Orleans from Colombia. His stepfather found work in construction, and after college Mr. Cabezas joined iMerit as it began working on the Spanish-language digital assistant.

He annotated everything from tweets to restaurant reviews, identifying people and places and pinpointing ambiguities. In Guatemala, for instance, “pisto” means money, but in Mexico, it means beer. “Every day was a new project,” he said.

The office has expanded into other work, serving businesses that want to keep their data within the United States. Some projects must remain stateside, for legal and security purposes.

Glenda Hernandez, 42, who was born in Guatemala, said she missed her old work on the digital assistant project. She loved to read. She reviewed books online for big publishing companies so she could get free copies, and she relished the opportunity of getting paid to read in Spanish.

Glenda Hernandez, part of the iMerit staff in New Orleans, has learned to tell the difference between a good cough and a cough that could indicate illness. Credit: Bryan Tarnowski for The New York Times

“That was my baby,” she said of the project.

She was less interested in image tagging or projects like the one that involved annotating recordings of people coughing; it was a way to build A.I. that identifies symptoms of illness over the phone.

“Listening to coughs all day is kind of disgusting,” she said.

The work is easily misunderstood, said Ms. Gray, the Microsoft anthropologist. Listening to people cough all day may be disgusting, but that is also how doctors spend their days. “We don’t think of that as drudgery,” she said.

Ms. Hernandez’s work is intended to help doctors do their jobs or maybe, one day, replace them. She takes pride in that. Moments after complaining about the project, she pointed to her colleagues across the office.

“We were the cough masters,” she said.

Kristy Milland of Toronto spent 14 years working on Amazon Mechanical Turk, which crowdsources data annotation tasks. Now she tries to improve conditions for people in those jobs. Credit: Arden Wray for The New York Times

In 2005, Kristy Milland signed up for her first job on Amazon Mechanical Turk. She was 26, and living in Toronto with her husband, who managed a local warehouse. Mechanical Turk was a way of making a little extra money.

The first project was for Amazon itself. Three photos of a storefront would pop up on her laptop, and she would choose the one that showed the front door. Amazon was building an online service similar to Google Street View, and the company needed help picking the best photos.

She made three cents for each click, or about 18 cents a minute. In 2010, her husband lost his job, and “MTurk” became a full-time gig. For two years, she worked six or seven days a week, sometimes as much as 17 hours a day. She made about $50,000 a year.

“It was enough to live on then. It wouldn’t be now,” Ms. Milland said.

The work at that time didn’t really involve A.I. For another project, she would pull information out of mortgage documents or retype names and addresses from photos of business cards, sometimes for as little as a dollar an hour.

Around 2010, she started labeling for A.I. projects. Ms. Milland tagged all sorts of data, like gory images that showed up on Twitter (which helps build A.I. that can help remove gory images from the social network) or aerial footage likely taken somewhere in the Middle East (presumably for A.I. that the military and its partners are building to identify drone targets).

Projects from American tech giants, Ms. Milland said, typically paid more than the average job — about $15 an hour. But the job didn’t come with health care or paid vacation, and the work could be mind-numbing — or downright disturbing. She called it “horrifically exploitative.” Amazon declined to comment.

Since 2012, Ms. Milland, now 40, has been part of an organization called TurkerNation, which aims to improve conditions for thousands of people who do this work. In April, after 14 years on the service, she quit.

She is in law school, and her husband earns $600 a month less than their rent, which does not include utilities. So, she said, they are preparing to go into debt. But she will not go back to labeling data.

“This is a dystopian future,” she said. “And I am done.”

via NYT > Technology https://www.nytimes.com/2019/08/16/technology/ai-humans.html

Satire or Deceit? Christian Humor Site Feuds With Snopes

It’s a fake-news feud made for 2019.

On one side is Snopes, the influential fact-checking website founded 25 years ago.

On the other is the Babylon Bee, an upstart Christian satirical website that lampoons progressive ideas, Democrats, Christians and President Trump.


They are fighting over how Snopes characterizes stories published by the Bee, which says Snopes has veered from its fact-checking mission by suggesting that the satirical site may be twisting its jokes to deceive readers.

“The reason we have to take it seriously is because social networks, which we depend on for our traffic, have relied upon fact-checking sources in the past to determine what’s fake news and what isn’t,” Seth Dillon, the Bee’s chief executive, said in an interview on Thursday with Shannon Bream of Fox News.

“In cases where they’re calling us fake news and lumping us in with them rather than saying this is satire, that could actually damage us,” he added. “It could put our business in jeopardy.”

Indeed, the line between misinformation and satire can be thin, and real consequences can result when it is crossed. On social media, parody can be misconstrued or misrepresented as it moves further and further from its source. And humor has been weaponized to help spread falsehoods online.

About two weeks ago, the Bee published an article that it thought was clearly satire. The piece, headlined “Georgia Lawmaker Claims Chick-Fil-A Employee Told Her To Go Back To Her Country, Later Clarifies He Actually Said ‘My Pleasure’,” was a parody of a real controversy involving a claim of racism, a counterclaim and a fair amount of outrage.

Soon after, Snopes, which investigates assertions based on their popularity or after requests from readers, published a fact check of that article that called its intent into question.

Mr. Dillon said the Bee was so frustrated by the way that Snopes had characterized its work that it had retained a law firm, but he did not say whether any legal action had been taken. David Mikkelson, a founder of Snopes, said he received a letter from a Bee lawyer complaining about the fact check, but was unaware of any legal action.

Mr. Mikkelson disputed the suggestion that his website had a political motive for fact-checking the Bee, but acknowledged that the piece in question, which has since been updated, had been poorly phrased.

“The article that people were focusing on was not worded very well,” he said. “That’s our bad. We need to own that.”

He added that Snopes was not trying to discredit the Bee. “That’s not our intent and if we have conveyed that intent, then I apologize for that,” he said.

This week, a Bee piece satirizing the episode — titled “Snopes Issues Pre-Approval Of All Statements Made During Tonight’s Democratic Debate” — became the top-performing article on Facebook related to the topic “democratic debate,” according to BuzzSumo, a social media analysis company, as first reported by BuzzFeed.


A Babylon Bee article needled Snopes as having a liberal bias. 

Some conservatives said the Bee’s experience revealed political bias at Snopes. But in Mr. Mikkelson’s view, Snopes is now subject to the very kind of attack it has been accused of carrying out.

“It’s now been spun into this ridiculous conspiracy theory that seems pretty contrived to gin up outrage” and clicks, he said.

The story of the feud began with a viral Facebook post on July 19 in which Erica Thomas, a Georgia state representative, said a white man at a grocery store had told her to “go back” to where she came from. The man later came forward, identified himself as a Democrat and disputed her account, fueling outrage among those on the right who believe reports of racism are overblown.

The Bee published its parody of the events July 22. Two days later, Snopes published its fact check of that article.

The original Snopes piece included the subheadline, “we’re not sure if fanning the flames of controversy and muddying the details of a news story classify an article as ‘satire.’” It called the Bee story a “ruse” and suggested it had been published “in an apparent attempt to maximize the online indignation.”

That language has since been removed “for tone and clarity,” according to an editors’ note atop the piece. Snopes, it says, is working to create standards for how to address humor and satire.


The Babylon Bee often pokes fun at both Democrats and President Trump. 

On Twitter, Adam Ford, the founder of the Bee, described the Snopes article as a “hit piece.” He also complained that Snopes had not been as critical in another fact-check of a piece from The Onion, a satirical website that, despite its fame and absurdist articles, continues to fool unsuspecting readers.

“A clumsy mistake or an incompetent writer are insufficient explanations for publishing something like this when you position yourself as an unbiased, stalwart arbiter of truth and presume to wield the influence that comes along with that title,” he wrote.

In a recent newsletter, the Bee said a past Snopes fact-check had prompted Facebook, which was then in a fact-checking partnership with Snopes, to “threaten us with limitations and demonetization.” Facebook eventually acknowledged the mistake and said the Bee piece — about CNN buying industrial washing machines to “spin” news — “should not have been rated false in our system.”

Snopes pulled out of the Facebook partnership in February, but some critics of the recent fact-check have argued that Snopes’s actions could still affect the Bee’s Facebook presence, a suggestion Mr. Mikkelson disputes.

“We have absolutely no ability to demonetize, deplatform, blacklist anybody,” he said. “We have no means to stop anyone from publishing on a particular platform or to limit their reach.”

Snopes determines what to cover based on reader input via email, Facebook and Twitter as well as what’s trending on Google, social media and its own website searches. As a result, it often covers claims and satire that, to many, may seem obviously false or intentionally humorous.

“Some people just don’t get or are not very good at recognizing uses of sarcasm or irony or archness,” Mr. Mikkelson said.

In the Fox News appearance, Mr. Dillon, the Bee chief executive, seemed to acknowledge that.

“There’s people who aren’t familiar with us who are seeing our stuff,” he said. “So if they want to fact-check it, fine. You can rate it false, you can rate it satire, ideally, and just say ‘Hey, this came from the Bee, it’s obviously satire, they’re a well-known satire publication.’ That would be as far as it needs to go.”

via NYT > Media & Advertising https://www.nytimes.com/2019/08/03/us/snopes-babylon-bee.html

Infographic: How to write irresistible headlines, from A-Z

Read on for an alphabet’s worth of tips to craft zippy, snappy, satisfying story-toppers.

An online writer’s primary job is to get the reader to click or scroll down the page.

The best way to clear this attention-grabbing hurdle is to cook up a headline so spicy, scintillating, intriguing or alluring that it compels your audience to continue. Of course, you might never concoct a classic such as “Headless body in topless bar” or “Super Caley go ballistic, Celtic are atrocious,” but there is a plethora of proven methods to grab your reader.

Feldman Creative shares an alphabet’s worth of headline-writing tips, from A-Z, in a helpful infographic. The guidance includes the following tactics:

  • “Posing a question … remains one of the best ways to engage the reader,” the piece posits. For instance, if you’re writing about bran muffins, you might try “Would you like to banish constipation forever?” instead of “Why bran muffins are good” to create a sense of urgency. (Hopefully, you’ll avoid the bran muffin assignment, but you get the idea.)
  • What’s in it for your audience? What do readers have to gain if they carry on reading? In the headline, tout a substantial benefit that a reader can pluck from your piece.
  • The infographic recommends: “A proven headline approach is to begin with a topical keyword phrase, followed by a colon—or dash—followed by a statement or question.” (Alternatively, to increase your pageviews, you might also try slipping in a reference to beloved baseball big boy Bartolo Colón.)
  • Do’s and don’ts. Right at the top, declare your intention to share what works and what will flop regarding a relevant topic for your audience. For example, “Do’s and don’ts of turning your ferret into a competitive racer” would probably compel clicks.
  • “Decisions are based on emotions,” as the infographic states, so hit your readers directly in their pain points. Use visceral language that stirs emotions, and let your passion for your subject shine through in the headline.

There are many more good headline-writing tips in this infographic, so peruse the whole thing.


via PR Daily https://www.prdaily.com/infographic-how-to-write-irresistible-headlines-from-a-z/

Photos of damaged MacBook Pro highlight the need to respond to Apple’s recall


If your MacBook Pro is part of the recall issued by Apple last month, best you get it taken care of sooner rather than later if you haven’t already done so.

The seriousness of the situation has been highlighted by an alarming set of images posted by Florida resident Steve Gagne.

Apple issued the recall on June 20 over concerns that the battery in some older MacBook Pros could overheat and pose a safety risk. A few days earlier, the Pro owned by Gagne — a machine that he later learned was part of the recall — apparently caught fire without warning.

Gagne posted the photos shortly after the recall was announced, though they came to wider attention this week after being surfaced by PetaPixel. They show a badly damaged machine, partially blackened by the fire.

Recounting what happened, Gagne wrote in a Facebook post that he was settling down for the night when “the battery in my MacBook Pro blew and a small fire filled my house with smoke.”

He went on: “You can imagine how quickly I jumped out of bed. The sound of it was what first threw me for a loop; but then the smell of a strong chemical/burning smell is what got me.”

Gagne said that he usually kept the laptop “on my couch or in a basket with notebooks [and] journals,” but that thankfully, on this occasion, he’d left it on his coffee table, reducing the chances of a more catastrophic blaze.

He added that when the battery caught fire, his MacBook Pro had been in sleep mode, with the display closed and the machine unplugged.

Affected machines

Apple said the recall notice affects 15-inch, Retina display MacBook Pros “sold primarily between September 2015 and February 2017,” and does not include any other MacBook Pro units or any other Apple laptop.

To find out if your MacBook Pro needs to have its battery replaced, visit Apple’s battery recall page and enter the machine’s serial number.

The tech giant is advising anyone with an affected computer to stop using it until the battery has been replaced.

There are three ways to receive the free repair — via an Apple authorized provider, by making an appointment at an Apple retail store, or by contacting Apple Support for instructions on how to mail it to the company’s repair center.

Oh, and be sure to back up all your data before handing over your MacBook Pro.

via Digital Trends https://www.digitaltrends.com/computing/macbook-pro-fire-may-be-linked-to-apples-recent-laptop-recall/