Several Common Incremental Reading Methods

Foxit Reader | BookxNote & SuperMemo 18

Block-style incremental reading for PDF files: Incremental Reading and Read Regions (Bilibili)

A PDF e-book reading tool students and researchers shouldn't miss, with notes exportable as mind maps (Bilibili)

[Memory software] A hands-on tutorial series on SuperMemo (Bilibili)

BookxNote | MarginNote & Anki

BookxNote | A MarginNote for Windows | Study system | Anki (Bilibili)

Incremental reading with MarginNote and Anki (Bilibili)

RemNote Pro | Roam Research

RemNote Tutorial: PDF Editor (Pro Version)

Demonstrating my Academic Paper Reading Workflow (ft. RemNote PRO)

Roam Research

PDF Highlight Extension for Roam Research - Full Demo

Obsidian & Anki | Mochi

Using Obsidian and Anki together (Bilibili)

https://github.com/search?q=obsidian+pdf

https://github.com/akaalias/obsidian-extract-pdf

Spaced repetition made easy

Transfer PDF Annotations from MarginNote to PKM (ft. RemNote, Obsidian, Roam Research)

Searching, PDF Reading & Note-Taking in Add Dialog & Anki

Searching, PDF Reading & Note-Taking in Add Dialog

Using the Anki add-ons Image Occlusion Enhanced and Searching, PDF Reading & Note-Taking together (Bilibili)

Notion & Notion2Anki

https://github.com/alemayhu/notion2anki

https://github.com/alemayhu/Notion-to-Anki

Polar & Anki

https://getpolarized.io/2021/02/08/Review-The-15-Best-Anki-Add-Ons-To-Boost-Your-College-Performance-In-2021.html

https://ankiweb.net/shared/info/734898866

[Advanced Anki techniques] Incremental reading with Anki | Polar (Bilibili)

Incremental Reading v4.10.3 & Anki

Incremental Reading v4.10.3

[Anki add-ons] (10) Incremental reading: the Incremental Reading add-on (Bilibili)
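
Every pairing above ends at the same place: extracts and highlights become cards whose review dates a spaced-repetition scheduler decides. For orientation, here is a minimal Java sketch of SM-2, the 1987 SuperMemo algorithm that Anki's scheduler is derived from; the class and field names are illustrative and do not come from any tool listed above.

/**
 * Minimal sketch of the SM-2 spaced-repetition schedule.
 * grade is the recall quality, 0-5; 3 and above counts as a successful recall.
 */
public class Sm2Card {
    private int repetitions = 0;   // consecutive successful reviews
    private double easiness = 2.5; // easiness factor (EF), never below 1.3
    private int intervalDays = 0;  // days until the next review

    /** Applies one review and returns the next interval in days. */
    public int review(int grade) {
        if (grade < 3) {
            // Failed recall: relearn from the start; SM-2 leaves EF unchanged here.
            repetitions = 0;
            intervalDays = 1;
        } else {
            repetitions++;
            if (repetitions == 1) {
                intervalDays = 1;
            } else if (repetitions == 2) {
                intervalDays = 6;
            } else {
                intervalDays = (int) Math.round(intervalDays * easiness);
            }
            // EF update from the SM-2 paper, clamped at 1.3.
            easiness = Math.max(1.3,
                easiness + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02));
        }
        return intervalDays;
    }

    public static void main(String[] args) {
        Sm2Card card = new Sm2Card();
        for (int grade : new int[] {5, 4, 3, 5}) {
            System.out.println("next review in " + card.review(grade) + " day(s)");
        }
        // prints 1, 6, 16, 39 for this grade sequence
    }
}

Anki modifies this scheme (a different grade scale, learning steps, interval fuzz), but the interval-times-easiness growth is the same idea, and it is what all of the incremental-reading pipelines above ultimately feed.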

package com.fora;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FileOperate {

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        init(); // copy the input file to HDFS and record the datanode list

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(FileOperate.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        /* set the input and output paths */
        FileInputFormat.addInputPath(job, new Path("hdfs:///copyOftest.c"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs:///wordcount"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    /* Mapper: split each input line into tokens and emit (word, 1). */
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    /* Reducer (also used as the combiner): sum the counts for each word. */
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void init() throws IOException {
        Configuration config = new Configuration();

        /* copy the local file to HDFS */
        String srcFile = "/test.c";
        String dstFile = "hdfs:///copyOftest.c";
        FileSystem hdfs = FileSystem.get(config);
        hdfs.copyFromLocalFile(new Path(srcFile), new Path(dstFile));
        System.out.print("copy success!\n");

        /* print the block size of the copied file */
        FileStatus fileStatus = hdfs.getFileStatus(new Path(dstFile));
        System.out.println(fileStatus.getBlockSize());

        /* get the list of datanodes in the cluster */
        DistributedFileSystem dfs = (DistributedFileSystem) hdfs;
        DatanodeInfo[] dataNodeStats = dfs.getDataNodeStats();

        /* create a file on HDFS and write the datanode hostnames to it, one per line */
        Path outputPath = new Path("hdfs:///output/listOfDatanode");
        FSDataOutputStream outputStream = hdfs.create(outputPath);
        for (DatanodeInfo node : dataNodeStats) {
            String name = node.getHostName();
            System.out.println(name);
            outputStream.write((name + "\n").getBytes());
        }
        outputStream.close();
    }
}
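
The job above assumes a running HDFS as the default filesystem and a local /test.c on the submitting machine. Under those assumptions, a typical way to compile and launch it is (directory and jar names are illustrative):

javac -classpath "$(hadoop classpath)" -d classes FileOperate.java
jar cf fileoperate.jar -C classes .
hadoop jar fileoperate.jar com.fora.FileOperate

The word counts land in hdfs:///wordcount, and the datanode list in hdfs:///output/listOfDatanode.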

