Rethinking event loop integration APIs for io_uring

APIs for operations that take a long time are often asynchronous so that applications can continue with other tasks while an operation is running. Asynchronous APIs initiate an operation and then return immediately. The application is notified when the operation completes through a callback or by monitoring a file descriptor for activity (for example, when data arrives on a TCP socket).

Asynchronous applications are usually built around an event loop that waits for the next event and invokes a function to handle the event. Since the details of event loops differ between applications, libraries need to be designed carefully to integrate well with a variety of event loops.

Event loop integration APIs today

A popular library with asynchronous APIs is the libcurl file transfer library that is used for making HTTP requests. It has the following (slightly simplified) event loop integration API:

#define CURL_WAIT_POLLIN    0x0001   /* Ready to read? */
#define CURL_WAIT_POLLOUT   0x0004   /* Ready to write? */

int socket_callback(CURL *easy,      /* easy handle */
                    int fd,          /* socket */
                    int what,        /* describes the socket */
                    void *userp,     /* private callback pointer */
                    void *socketp);  /* private socket pointer */

libcurl invokes the application's socket_callback() to start or stop monitoring file descriptors. When the application's event loop detects file descriptor activity, the application invokes libcurl's curl_multi_socket_action() API to let the library process the file descriptor.
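
For example, an application whose event loop is built on epoll might dispatch socket activity back to libcurl roughly like this (a sketch of the idea, not code from this post; error handling is omitted and the CURLM *multi handle is assumed to exist already):

#include <curl/curl.h>
#include <stdint.h>
#include <sys/epoll.h>

/* Called from the application's event loop when epoll reports activity on a
 * socket that libcurl asked us to monitor via socket_callback() */
static void handle_curl_socket(CURLM *multi, int fd, uint32_t epoll_events)
{
    int running_handles;
    int flags = 0;

    if (epoll_events & EPOLLIN) {
        flags |= CURL_CSELECT_IN;
    }
    if (epoll_events & EPOLLOUT) {
        flags |= CURL_CSELECT_OUT;
    }

    /* libcurl performs the actual read/write on the socket */
    curl_multi_socket_action(multi, fd, flags, &running_handles);
}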

There are variations on this theme but generally libraries expose file descriptors and event flags (read/write/error) so the application can monitor file descriptors from its own event loop. The library then performs the read(2) or write(2) call when the file descriptor becomes ready.

Linux io_uring

The Linux io_uring API (pdf) can be used to implement traditional event loops that monitor file descriptors. But it also supports asynchronous system calls like read(2) and write(2) (best used when IORING_FEAT_FAST_POLL is available). The latter is interesting because it combines two syscalls into a single efficient syscall:

  1. Waiting for file descriptor activity.
  2. Reading/writing the file descriptor.

Existing applications use syscalls like epoll_wait(2), poll(2), or the old select(2) to wait for file descriptor activity. They can also use io_uring's IORING_OP_POLL_ADD to achieve the same effect.

After the file descriptor becomes ready, a second syscall like read(2) or write(2) is required to actually perform I/O.

io_uring's asynchronous IORING_OP_READ or IORING_OP_WRITE (including variants for vectored I/O or sockets) only requires a single io_uring_enter(2) call. If io_uring sqpoll is enabled then a syscall may not even be required to submit these operations!

To summarize, it's more efficient to perform a single asynchronous read/write instead of first monitoring file descriptor activity and then performing a read(2) or write(2).

Towards asynchronous I/O library APIs

Existing library APIs do not fit the asynchronous read/write approach because they expect the application to wait for file descriptor activity and then for the library to invoke a syscall to perform I/O. A new model is needed where the library tells the application about I/O instead of asking the application to monitor file descriptors for activity.
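
What could that look like? The sketch below is only an illustration of the idea (the callback signatures and the lib_read_done()/lib_write_done() completion functions are invented for this example): the application implements the callbacks, the library calls them whenever it wants I/O performed, and the application notifies the library when each operation completes.

/* Illustration only: asynchronous I/O callbacks implemented by the application */
struct aio_operations {
    /* Start reading count bytes from fd into buf. The application calls
     * lib_read_done(opaque, nread) when the read has completed. */
    void (*read)(int fd, void *buf, size_t count, void *opaque);

    /* Start writing count bytes from buf to fd. The application calls
     * lib_write_done(opaque, nwritten) when the write has completed. */
    void (*write)(int fd, const void *buf, size_t count, void *opaque);
};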

The concept of monitoring file descriptor activity is gone. Instead the API focusses on asynchronous I/O operations that can be implemented by the application however it sees fit.

Applications using io_uring can use IORING_OP_READ and IORING_OP_WRITE to implement asynchronous operations efficiently. Traditional applications can still use their event loops but now also perform the read(2), write(2), etc syscalls on behalf of the library.

Some libraries don't need a full set of struct aio_operations callbacks because they only perform I/O in limited ways. For example, a library whose only file descriptor is a Linux eventfd can instead present this simplified API:

/*
 * Return an eventfd(2) file descriptor that the application must read from and
 * call lib_eventfd_fired() when a non-zero value was read.
 */
int lib_get_eventfd(struct libobject *obj);

/*
 * The application must call this function after a non-zero value has been
 * read from the eventfd returned by lib_get_eventfd().
 */
void lib_eventfd_fired(struct libobject *obj);

Although this simplified API is similar to traditional event loop integration APIs it is now the application's responsibility to perform the eventfd read(2), not the library's. This way applications using io_uring can implement the read efficiently.
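
To make the difference concrete, here is a rough liburing sketch (my own, untested) of how an io_uring application could service this API with a single IORING_OP_READ per notification instead of polling and then reading:

#include <liburing.h>
#include <stdint.h>

struct eventfd_watch {
    struct libobject *obj;
    uint64_t val;   /* read buffer, must stay alive until the completion arrives */
};

/* Submit one sqe that both waits for and reads the library's eventfd */
static void watch_lib_eventfd(struct io_uring *ring, struct eventfd_watch *w)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    io_uring_prep_read(sqe, lib_get_eventfd(w->obj), &w->val, sizeof(w->val), 0);
    io_uring_sqe_set_data(sqe, w);
    io_uring_submit(ring);
}

/* Called from the application's completion loop for eventfd cqes */
static void eventfd_cqe(struct io_uring *ring, struct io_uring_cqe *cqe)
{
    struct eventfd_watch *w = io_uring_cqe_get_data(cqe);

    if (cqe->res == sizeof(w->val) && w->val != 0) {
        lib_eventfd_fired(w->obj);
    }
    watch_lib_eventfd(ring, w);   /* re-arm for the next notification */
    io_uring_cqe_seen(ring, cqe);
}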

Does the extra syscall matter?

Whether it is worth eliminating the extra syscall depends on one's performance requirements. When I/O is relatively infrequent then the overhead of the additional syscall may not matter.

While working on QEMU I found that the extra read(2) on eventfds causes a measurable overhead.

Conclusion

Splitting file descriptor monitoring from I/O is suboptimal for Linux io_uring applications. Unfortunately, existing library APIs are often designed in this way. Letting the application perform asynchronous I/O on behalf of the library allows a more efficient implementation with io_uring while still supporting applications that use older event loops.

Avoiding bitrot in C macros

A common approach to debug messages that can be toggled at compile-time in C programs is:

#ifdef ENABLE_DEBUG
#define DPRINTF(fmt, ...) do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...)
#endif

Usually the ENABLE_DEBUG macro is not defined in normal builds, so the C preprocessor expands the debug printfs to nothing. No messages are printed at runtime and the program's binary size is smaller since no instructions are generated for the debug printfs.

This approach has the disadvantage that it suffers from bitrot, the tendency for source code to break over time when it is not actively built and used. Consider what happens when one of the variables used in the debug printf is not updated after being renamed:

- int r;
+ int radius;
  ...
  DPRINTF("radius %d\n", r);

The code continues to compile after r is renamed to radius because the DPRINTF() macro expands to nothing. The compiler does not syntax check the debug printf and misses that the outdated variable name r is still in use. When someone defines ENABLE_DEBUG months or years later, the compiler error becomes apparent and that person is confronted with fixing a new bug on top of whatever they were trying to debug when they enabled the debug printf!

It's actually easy to avoid this problem by writing the macro differently:

#ifdef ENABLE_DEBUG
#define DEBUG 1
#else
#define DEBUG 0
#endif

#define DPRINTF(fmt, ...) do { \
        if (DEBUG) { \
            fprintf(stderr, fmt, ## __VA_ARGS__); \
        } \
    } while (0)

When ENABLE_DEBUG is not defined the macro expands to:

do {
    if (0) {
        fprintf(stderr, fmt, ...);
    }
} while (0)

What is the difference? This time the compiler parses and syntax checks the debug printf even when it is disabled. Luckily compilers are smart enough to eliminate dead code, code that cannot be executed, so the binary size remains small.

This applies not just to debug printfs. More generally, all preprocessor conditionals suffer from bitrot. If an #ifdef can be replaced with equivalent unconditional code then it's often worth doing.
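
The same trick applies to other conditionally compiled code. A contrived example (it assumes the build system defines ENABLE_TRACING to 0 or 1 instead of leaving it undefined):

/* Bitrot-prone: trace_request(req) is never parsed when tracing is off */
#ifdef ENABLE_TRACING
    trace_request(req);
#endif

/* Better: always compiled and syntax checked, eliminated as dead code when 0 */
    if (ENABLE_TRACING) {
        trace_request(req);
    }

There is no runtime cost, but the code keeps compiling as the rest of the program changes around it.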

How to check VIRTIO feature bits inside Linux guests

VIRTIO devices have feature bits that indicate the presence of optional features. The feature bit space is divided into core VIRTIO features (e.g. notify on empty), transport-specific features (PCI, MMIO, CCW), and device-specific features (e.g. virtio-net checksum offloading). This article shows how to check whether a feature is enabled inside Linux guests.

The feature bits are used during VIRTIO device initialization to negotiate features between the device and the driver. The device reports a fixed set of features, typically all the features its implementors chose to offer from the VIRTIO specification version they developed against. The driver likewise reports the features its developers chose to implement.

Feature bit negotiation determines the subset of features supported by both the device and the driver. A new driver might not be able to enable all the features it supports if the device is too old. The same is true vice versa. This offers compatibility between devices and drivers. It also means that you don't know which features are enabled until the device and driver have negotiated them at runtime.

Where to find feature bit definitions

VIRTIO feature bits are listed in the VIRTIO specification. You can also grep the linux/virtio-*.h header files:

$ grep VIRTIO_SCSI_F_ /usr/include/linux/virtio_scsi.h
#define VIRTIO_SCSI_F_INOUT                    0
#define VIRTIO_SCSI_F_HOTPLUG                  1
#define VIRTIO_SCSI_F_CHANGE                   2
...

Here the VIRTIO_SCSI_F_INOUT (0) constant is for the 1st bit (1ull << 0). Bit-numbering can be confusing because different standards, vendors, and languages express it differently. Here it helps to think of a bit shift operation like 1 << BIT.
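
In other words, given a 64-bit feature mask, feature bit N is set when masking with 1ULL << N yields a non-zero value. A tiny illustration (the helper function is mine, not a kernel API):

#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_RING_F_EVENT_IDX 29   /* from linux/virtio_ring.h */

static bool virtio_feature_is_set(uint64_t features, unsigned int bit)
{
    return (features & (1ULL << bit)) != 0;
}

/* virtio_feature_is_set(features, VIRTIO_RING_F_EVENT_IDX) tests bit 29 */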

Checking feature bits from sysfs

The common Linux virtio driver core that is used by all VIRTIO devices exposes a sysfs file called features. This file contains the feature bits in binary representation starting with the 1st bit on the left and more significant bits to the right. The reported bits are the subset that both the device and the driver support.

To check if the virtio-blk device /dev/vda has the VIRTIO_RING_F_EVENT_IDX (29) bit set:

$ cat /sys/block/vda/device/driver/virtio*/features
01100010011101100000000000100010100
$ python -c "print('$(</sys/block/vda/device/driver/virtio*/features)'[29])"

This prints 1 if the feature bit is set and 0 if it is not.

Other device types can be found through similar sysfs paths.

How the Linux VFS, block layer, and device drivers fit together

The Linux kernel storage stack consists of several components including the Virtual File System (VFS) layer, the block layer, and device drivers. This article gives an overview of the main objects that a device driver interacts with and their relationships to each other. Actual I/O requests are not covered, instead the focus is on the objects representing the disk.

Let's start with a diagram of the key data structures and then an explanation of how they work together.
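
(The original post includes a diagram at this point. The sketch below is my own rough text rendering of the relationships described in the following sections, not the original figure.)

VFS layer       inode / struct file_operations (fs/block_dev.c)
                              |
                              v
                struct block_device   (one per device node; partitions
                              |         point to the whole device
                              |         through bd_contains)
                              v
Block layer     struct gendisk ------> struct hd_struct partitions
                              |         (including part0 for the
                              |          whole device)
                              v
                struct request_queue
                              |
                              v
Device driver   implements struct block_device_operations and submits
                requests from the request queue to the physical device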

The Virtual File System (VFS) layer

The VFS layer is where file system concepts like files and directories are handled. The VFS provides an interface that file systems like ext4, XFS, and NFS implement to register themselves with the kernel and participate in file system operations. The struct file_operations interface is the most interesting for device drivers as we are about to see.

System calls like open(2), read(2), etc are handled by the VFS and dispatched to the appropriate struct file_operations functions.

Block device nodes like /dev/sda are implemented in fs/block_dev.c, which forms a bridge between the VFS and the Linux block layer. The block layer handles the actual I/O requests and is aware of disk-specific information like capacity and block size.

The main VFS concept that device drivers need to be aware of is struct block_device_operations and the struct block_device instances that represent block devices in Linux. A struct block_device connects the VFS inode and struct file_operations interface with the block layer struct gendisk and struct request_queue.

In Linux there are separate device nodes for the whole device (/dev/sda) and its partitions (/dev/sda1, /dev/sda2, etc). This is handled by struct block_device so that a partition has a pointer to its parent in bd_contains.
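
A heavily trimmed view of the fields involved, roughly as they looked in kernels of that era (this is not the full definition):

struct block_device {
    struct gendisk      *bd_disk;     /* the whole disk this node belongs to */
    struct hd_struct    *bd_part;     /* the partition, or part0 for the whole disk */
    struct block_device *bd_contains; /* whole-device block_device, for partitions */
    /* ... */
};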

The block layer

The block layer handles I/O request queues, disk partitions, and other disk-specific functionality. Each disk is represented by a struct gendisk and may have multiple struct hd_struct partitions. There is always part0, a special "partition" covering the entire block device.

I/O requests are placed into queues for processing. Requests can be merged and scheduled by the block layer. Ultimately a device driver receives a request for submission to the physical device. Queues are represented by struct request_queue.

Device drivers

The disk device driver registers a struct gendisk with the block layer and sets up the struct request_queue to receive requests that need to be submitted to the physical device.
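
Sketched in code (simplified, without error handling, and roughly following the API as it existed when this was written; the my_* names are placeholders):

struct gendisk *disk = alloc_disk(16);      /* minor numbers for partitions */

disk->major = my_major;
disk->first_minor = 0;
disk->fops = &my_block_device_operations;   /* struct block_device_operations */
disk->queue = my_request_queue;             /* struct request_queue set up via blk-mq */
snprintf(disk->disk_name, sizeof(disk->disk_name), "myblk0");
set_capacity(disk, nr_sectors);

add_disk(disk);                             /* register with the block layer */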

There is one struct gendisk for the entire device even though userspace may open struct block_device instances for multiple partitions on the disk. Disk partitions are not visible at the driver level because I/O requests have already had their Logical Block Address (LBA) adjusted by the partition start offset.

How it all fits together

The VFS is aware of the block layer struct gendisk. The device driver is aware of both the block layer and the VFS struct block_device. The block layer does not have direct connections to the other components but the device driver provides callbacks.

One of the interesting aspects is that a device driver may drop its reference to struct gendisk but struct block_device instances may still have their references. In this case no I/O can occur anymore because the driver has stopped the disk and the struct request_queue, but userspace processes can still call into the VFS and struct block_device_operations callbacks in the device driver can still be invoked.

Thinking about this case is why I drew the diagram and ended up writing about this topic!

virtio-fs has landed in QEMU 5.0!

The virtio-fs shared host<->guest file system has landed in QEMU 5.0! It consists of two parts: the QEMU -device vhost-user-fs-pci and the actual file server called virtiofsd. Guests need to have a virtio-fs driver in order to access shared file systems. In Linux the driver is called virtiofs.ko and has been upstream since Linux v5.4.

Using virtio-fs

Thanks to libvirt virtio-fs support, it's possible to share directory trees from the host with the guest like this:

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/path/on/host'/>
  <target dir='mount_tag'/>
</filesystem>

The host /path/on/host directory tree can be mounted inside the guest like this:

# mount -t virtiofs mount_tag /mnt

Applications inside the guest can then access the files as if they were local files. For more information about virtio-fs, see the project website.

How it works

For the most part, -device vhost-user-fs-pci just facilitates the connection to virtiofsd where the real work happens. When guests submit file system requests they are handled directly by the virtiofsd process on the host and don't need to go via the QEMU process.
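
The same wiring can also be done by hand. The following is a from-memory sketch of a typical QEMU 5.0-era invocation (paths, sizes, and names are placeholders, not taken from this post):

# Start the file server for the shared directory
virtiofsd --socket-path=/tmp/vhostqemu -o source=/path/on/host &

# Start QEMU with a vhost-user-fs device connected to that socket.
# Guest RAM must be shared so virtiofsd can access it.
qemu-system-x86_64 \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,chardev=char0,tag=mount_tag \
    -object memory-backend-memfd,id=mem,size=4G,share=on \
    -numa node,memdev=mem \
    ...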

virtiofsd is a FUSE file system daemon with virtio-fs extensions. virtio-fs is built on top of the FUSE protocol and therefore supports the POSIX file system semantics that applications expect from a native Linux file system. The Linux guest driver shares a lot of code with the traditional FUSE kernel module.

Further information

I have given a few presentations on virtio-fs:

  • virtio-fs: A Shared File System for Virtual Machines at FOSDEM '20, video (webm) and slides (pdf)
  • Virtio-fs for Kata Containers storage at Kata Containers Architecture Committee Call, slides (pdf)
  • virtio-fs: A Shared File System for Virtual Machines at KVM Forum 2019, video (YouTube) and slides (pdf)

Future work

A key feature of virtio-fs is the ability to directly access the host page cache, eliminating the need to copy file contents into guest RAM. This so-called DAX support is not upstream yet.

Live migration is not yet implemented. It is a little challenging to transfer all file system state to the destination host and seamlessly continue file system operation without remounting, but it should be doable.

There is a Rust implementation of virtiofsd that is close to reaching maturity and will replace the C implementation. The advantage is that Rust has better memory and thread safety than C so entire classes of bugs can be eliminated. Also, the codebase is written from scratch whereas the C implementation was a combination of several existing pieces of software that were not designed together.

Saturday, February 15, 2020

An introduction to GDB scripting in Python

Sometimes it's not humanly possible to inspect or modify data structures manually in a debugger because they are too large or complex to navigate. Think of a linked list with hundreds of elements, one of which you need to locate. Finding the needle in the haystack is only possible by scripting the debugger to automate repetitive steps.

This article gives an overview of the GNU Debugger's Python scripting support so that you can tackle debugging tasks that are not possible manually.

What scripting GDB in Python can do

GDB can load Python scripts to automate debugging tasks and to extend debugger functionality. I will focus mostly on automating debugging tasks; extending the debugger is also very powerful, though it is needed less often.

Say you want to search a linked list for a particular node:

(gdb) p node.next
...
(gdb) p node.next.next
...
(gdb) p node.next.next.next

Entering these commands by hand becomes impractical when the list has hundreds of elements. A Python script can walk the list automatically and stop when it finds the node you are looking for.

Loading Python scripts

The source GDB command executes files ending with the .py extension in a Python interpreter. The interpreter has access to the gdb Python module that exposes debugging APIs so your script can control GDB.

$ cat my-script.py
print('Hi from Python, this is GDB {}'.format(gdb.VERSION))
$ gdb
(gdb) source my-script.py
Hi from Python, this is GDB Fedora 8.3.50.20240824-28.fc31

Notice that the gdb module is already imported. See the GDB Python API documentation for full details of this module.

It's also possible to run ad-hoc Python commands from the GDB prompt:

(gdb) py print('Hi')
Hi

Executing commands

GDB commands are executed using gdb.execute(command, from_tty, to_string). For example, gdb.execute('step') runs the step command. Output can be collected as a Python string by setting to_string to True. By default output goes to the interactive GDB session.
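
For instance, a script can capture a command's output as a string and inspect it (a small made-up example):

# Capture the backtrace instead of printing it to the terminal
backtrace = gdb.execute('bt', from_tty=False, to_string=True)

# Inspect the output string
if 'start_thread' in backtrace:
    print('This thread was created by pthread_create()')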

Although gdb.execute() is fundamental to GDB scripting, at best it allows screen-scraping (interpreting the output string) rather than a Pythonic way of controlling GDB. There is actually a full Python API that represents the debugged program's types and values in Python. Most scripts will use this API instead of simply executing GDB commands as if simulating an interactive shell session.

Navigating program variables

The entry point to navigating program variables is gdb.parse_and_eval(). It returns a gdb.Value.

When a gdb.Value is a struct its fields can be indexed using value['field1']['child_field1'] syntax. The following example iterates a linked list:

elem = gdb.parse_and_eval('block_backends.tqh_first')
while elem:
    name = elem['name'].string()
    if name == 'drive2':
        print('Found {}'.format(elem['dev']))
        break
    elem = elem['link']['tqe_next']

This script iterates the block_backends linked list and checks the name field of each element against "drive2". When it finds "drive2" it prints the dev field of that element.

There is a lot more that GDB Python scripts can do but you'll have to check out the API documentation to learn about it.

Conclusion

Python scripting makes it possible to automate debugging tasks, like the linked list search above, that would be too repetitive to perform by hand. The same APIs can also be used to extend GDB with new functionality, so it is worth getting familiar with the gdb module.

Video for "virtio-fs: a shared file system for virtual machines" at FOSDEM '20 now available

The video and slides from my virtio-fs talk at FOSDEM '20 are now available!

virtio-fs is a shared file system that lets guests access a directory on the host. It can be used for many things, including secure containers, booting from a root directory, and testing code inside a guest.

The talk explains how virtio-fs works, including the Linux FUSE protocol that it's based on and how FUSE concepts are mapped to VIRTIO.

virtio-fs guest drivers have been available since Linux v5.4 and QEMU support will be available from QEMU v5.0 onwards.

Video (webm) (mp4)

Slides (PDF)
